Machine learning Archives | Microsoft AI Blogs
http://approjects.co.za/?big=en-us/ai/blog/topic/machine-learning/ (Wed, 01 Apr 2026 20:04:12 +0000)

Enable accelerated growth with confidence: A Forrester TEI study projects more than 200% ROI over three years and six-month payback using Dynamics 365 Business Central
http://approjects.co.za/?big=en-us/dynamics-365/blog/business-leader/2026/03/31/enable-accelerated-growth-with-confidence-a-forrester-tei-study-projects-more-than-200-roi-over-three-years-and-six-month-payback-using-dynamics-365-business-central/
Tue, 31 Mar 2026 17:00:00 +0000

Microsoft commissioned Forrester Consulting to evaluate the potential return on investment organizations may realize by deploying Business Central.

The post Enable accelerated growth with confidence: A Forrester TEI study projects more than 200% ROI over three years and six-month payback using Dynamics 365 Business Central appeared first on Microsoft AI Blogs.

Growth is exciting—but it introduces complexity. 

As small and midsize businesses scale, finance and operations become harder to manage. Transactions increase, reporting requirements expand, and disconnected systems start to strain visibility and margins. What once worked becomes a constraint. 

A newly published Forrester Total Economic Impact™ (TEI) study helps quantify what many organizations are already experiencing: modernizing on Microsoft Dynamics 365 Business Central delivers measurable financial impact.

Microsoft commissioned Forrester Consulting to evaluate the potential return on investment organizations may realize by deploying Business Central. Based on interviews with four business decision makers, aggregated to model a fictitious composite organization¹, the study projected that the composite organization could potentially realize: 

  • More than 200% return on investment (ROI) over three years 
  • An estimated $460K net present value (NPV) over three years 
  • Potential payback in six months 

Across finance productivity, enterprise resource planning (ERP) consolidation, improved profitability, and reporting efficiency, the composite organization modeled by Forrester was projected to realize more than $680K in three‑year, risk‑adjusted present value benefits.
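As a sanity check, the headline figures above fit together arithmetically. A minimal sketch (the benefit and NPV figures come from the study; the implied cost figure is our inference, not a number Forrester publishes):

```python
# How the study's headline figures relate. PV(benefits) and NPV are from
# the study; the implied PV of costs is derived, not published.
pv_benefits = 680_000   # three-year, risk-adjusted present value of benefits
npv = 460_000           # three-year net present value

# NPV = PV(benefits) - PV(costs), so the implied PV of costs is:
pv_costs = pv_benefits - npv   # 220,000

# In TEI studies, ROI is NPV divided by PV of costs:
roi = npv / pv_costs
print(f"Implied PV of costs: ${pv_costs:,}")
print(f"Implied ROI: {roi:.0%}")   # ~209%, consistent with "more than 200%"
```

The derived ~209% ROI is consistent with the "more than 200%" headline, which suggests the published figures describe one coherent model.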

These outcomes reflect the potential impact of modernizing finance and operations on a single, integrated cloud ERP platform—while also establishing the foundation for AI‑powered experiences like Microsoft 365 Copilot and intelligent agents. 

Where the value comes from 

The Forrester TEI study highlights several areas where organizations can potentially realize tangible, risk‑adjusted benefits when using Business Central. 

Support faster, more efficient finance operations 

Manual processes often slow growing organizations. Interviewed customers reported meaningful efficiency gains across accounts payable (AP), accounts receivable (AR), billing, and financial close. 

By year three, the composite organization was projected to potentially achieve: 

  • Up to 30% reduction in monthly close time 
  • Up to 50% time savings for AP, AR, and billing activities 

These improvements translated to more than $215K in present value over three years in finance productivity alone for the composite organization—allowing teams to shift focus from reconciliation to higher‑value analysis. 
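As a rough illustration of how time savings become a "three-year, risk-adjusted present value" figure, here is a minimal sketch. The annual saving, risk adjustment, and 10% discount rate are assumptions for illustration, not Forrester's actual model inputs:

```python
# Illustrative only: how an annual saving is risk-adjusted and discounted
# into a three-year present value. All input values are assumptions.
def present_value(cash_flows, rate=0.10):
    """Discount annual cash flows (year 1..n) back to today."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

annual_saving = 90_000      # assumed annual finance-productivity saving
risk_adjustment = 0.95      # assumed downward risk adjustment
flows = [annual_saving * risk_adjustment] * 3

print(f"Three-year PV: ${present_value(flows):,.0f}")
```

Because each year's saving is discounted, the three-year present value is always somewhat less than three times the annual saving.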

Standardizing data and workflows in Business Central also creates the conditions necessary for AI‑enabled automation. While AI benefits were not independently quantified in this study, interviewees noted that unified processes accelerate the adoption of Copilot‑supported approvals, variance analysis, and exception handling. 

Lower total cost of ownership through ERP consolidation 

Interviewees reported operating aging on‑premises ERP systems alongside spreadsheets and disconnected point solutions. Consolidating onto Business Central reduced infrastructure complexity and IT overhead. 

The study projected the following potential benefits for the composite organization over three years: 

  • More than 10% reduction in total cost of ownership (TCO) 
  • More than $170K in present value savings from retired systems and reduced maintenance 

Beyond direct savings, simplification can reduce operational risk and improve scalability—allowing organizations to grow without layering on new systems to compensate for gaps. 

Enable improved profitability through better visibility 

Unified, real‑time visibility across finance and operations enables faster, more informed decisions. 

By year three, the composite organization was modeled to potentially experience: 

  • Up to 3% improvement in net profit margins 
  • More than $245K in present value from improved profitability 

Better insight into costs, projects, and performance enables earlier course correction. AI‑powered experiences such as Copilot can further assist by surfacing cost variances, project overruns, or unbilled work sooner. While AI alone does not drive margin improvement, modern ERP data and standardized processes strengthen an organization’s ability to act with precision. 

Fast reporting and audit readiness 

Business Central’s integrated data model and native Microsoft Power BI capabilities streamlined reporting and audit preparation. 

Based on Forrester’s modeling, by year three the composite organization was projected to potentially reduce: 

  • Audit preparation time by up to 30% 
  • Time spent creating internal and executive reports 

These projected improvements were valued at nearly $50K in present value, while also enabling increased confidence in data accuracy and consistency. 

Beyond the numbers: Building an AI-ready foundation

In addition to quantified financial outcomes, interviewees highlighted broader operational improvements, including enhanced customer experience, reduced days sales outstanding (DSO), better warehouse management, and a more intuitive user experience. 

Just as importantly, Business Central provides an AI‑ready ERP foundation. 

By unifying finance and operations data and aligning processes to best practices, organizations are better positioned to leverage Copilot, Power BI, and intelligent agents to: 

  • Help reduce time-to-insight—not just report creation
  • Surface anomalies and trends quickly 
  • Enable more proactive, data‑driven decision‑making 

While AI‑driven outcomes were not directly measured in this TEI, the study reinforces a critical principle: realizing AI value at scale depends on clean data, integrated systems, and standardized processes. Business Central delivers that foundation. 

Read the full study 

For SMB‑focused organizations and partners evaluating ERP modernization, the full Forrester TEI study provides a detailed financial framework to help quantify projected value—grounded in customer interviews and risk‑adjusted modeling. 

Join us at Directions North America 2026 

The Forrester TEI study highlights the potential value organizations may realize with Business Central—from faster financial processes to improved profitability and lower total cost of ownership. 

We’ll continue these conversations at Directions North America 2026, where partners, Microsoft engineering, and product leaders come together to discuss what’s next for the Business Central ecosystem and the future of AI-powered ERP. 

Join me for the keynote, where I’ll explore how Business Central is evolving into an AI-powered system of action with Copilot and intelligent agents. 

Attendees will: 

  • Gain insight into the Business Central roadmap 
  • Prepare for upcoming AI-powered capabilities 
  • Connect directly with Microsoft engineering and product experts 
  • Engage with partners across the Business Central community 

  1. Composite organization assumption: Results are based on a Forrester modeled composite organization derived from customer interviews, with $50M in annual revenue, 300 employees, 15 core finance and accounting users, and 100 light users, using Dynamics 365 Business Central in a cloud deployment. All quantified benefits represent the three-year, risk-adjusted present value for the composite organization.

Highlights from FabCon and SQLCon 2026: Where databases and Fabric come together
http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2026/03/30/highlights-from-fabcon-and-sqlcon-2026-where-databases-and-fabric-come-together/
Mon, 30 Mar 2026 19:00:00 +0000

From technical sessions to hallway conversations, FabCon and SQLCon 2026 showcased the momentum behind the Fabric and SQL ecosystems.

The post Highlights from FabCon and SQLCon 2026: Where databases and Fabric come together appeared first on Microsoft AI Blogs.

FabCon is back for its third year, and this time it’s different.

For the first time, SQLCon joined the event, bringing the Microsoft Fabric and SQL communities together in Atlanta for an unforgettable week of learning, growing, and connecting.

More than 8,000 attendees gathered across nearly 300 sessions to explore the latest in Fabric and SQL. Some of the most valuable moments did not happen on stage. They happened through conversations and shared experiences across the community.

Bird's-eye view of FabCon and SQLCon 2026.

That is what makes FabCon special. It is not just what you learn. It is who you learn it with.

From packed sessions to impromptu discussions in the hallways, FabCon and SQLCon 2026 reflected the growing energy and momentum behind the Fabric and SQL ecosystems.

Here are some of the highlights from this year’s event.

If you haven’t already, check out Arun Ulag’s hero blog “FabCon and SQLCon 2026: Unifying databases and Fabric on a single, complete platform” for a complete look at all of our FabCon and SQLCon announcements across both Fabric and our database offerings.

A week of learning and collaboration

The week began with pre-conference workshops that allowed attendees to dive deeper into Microsoft Fabric and SQL through hands-on learning.

These sessions covered topics ranging from data engineering and real-time analytics to governance, AI, and advanced SQL scenarios. Participants worked directly with product teams and community experts to explore best practices and build new skills that they could bring back to their organizations.

Two speakers having a conversation on stage.

For many attendees, the workshops set the tone for the entire conference. They offered not only technical depth, but also early opportunities to meet peers, exchange ideas, and start conversations that continued throughout the week.

As the main conference kicked off, keynotes and breakout sessions explored how databases, analytics, real-time data, semantic intelligence, BI, and AI are coming together on Microsoft Fabric, and how organizations are using the platform to turn insights into action faster.

The Expo Hall experience

At the center of the Expo Hall, the Microsoft booth became a natural gathering point throughout the week.

It was a place to connect, explore, and take a break between sessions. Attendees grabbed coffee served by a robot, sampled custom Fabric and SQL-themed Coca-Cola flavors dispensed from Freestyle machines, and spent time with engineers and product teams while picking up a bit of conference swag.

The mix of hands-on experiences and casual conversations made it one of the most active and engaging spaces at the event.

People talking at the Expo Hall.

Nearby, the Ask the Experts area gave attendees the chance to connect directly with Microsoft engineers, product managers, and community leaders. These one-on-one discussions helped participants get guidance on everything from Fabric workloads and SQL optimization to governance, security, and AI strategy. The many partners represented in the Expo Hall also gave attendees a way to engage with expertise from across the ecosystem. 

These moments of direct engagement often turned into some of the most valuable conversations of the week.

Competition, creativity, and hands-on challenges

FabCon wouldn’t be complete without a little friendly competition.

Throughout the week, attendees tested their skills in Fast at Fabric, a timed analytics challenge where participants worked through real-world scenarios using Microsoft Fabric. Competitors analyzed data, validated assumptions, and generated insights while climbing a live leaderboard. The experience blended learning with competition in an engaging format.

The Trivia Challenge in the Microsoft booth also kept the energy high. Attendees tested their knowledge of Fabric and SQL in a fast-paced game that mixed learning with plenty of laughs. Interactive dashboards showcased Fabric capabilities while participants competed for prizes and bragging rights.

These experiences turned technical learning into something interactive and memorable while encouraging attendees to dive deeper into the platform.

Celebrating creativity and storytelling with data

The DataViz World Championships delivered one of the most anticipated and electric moments of the week.

As the live finale of an international Power BI competition, the event brought four finalists to the stage to build visualizations in real time using a brand-new dataset. In front of a live audience, they combined creativity, technical skill, and storytelling to turn raw data into compelling insights.

Excited attendees in their seats.

By the end of the session, one finalist earned the title of 2026 Power BI DataViz World Champion, closing out the competition with one of the most memorable moments of the conference.

Community at the center of the experience

At the heart of FabCon was the Community Lounge, a welcoming space dedicated to connection and community-led experiences.

Throughout the week, the lounge hosted meetups, community theater sessions, certification guidance, and informal gatherings where attendees could exchange ideas and build relationships. It became a natural meeting point where conversations continued long after sessions ended.

The event also featured a Women in Data luncheon and a DEI luncheon, which created space for discussions around leadership, resilience, and inclusive growth within the technology community. These sessions encouraged thoughtful dialogue and helped strengthen connections among attendees from diverse backgrounds.

Together, these experiences reinforced what makes the Fabric community unique: a culture of openness, collaboration, and shared learning.

Unforgettable moments from the week

FabCon is known for creating memorable experiences, and this year delivered plenty of them. Power Hour, a fan-favorite event, brought together Microsoft product teams and community leaders for an energetic session filled with creative demos, storytelling, and unexpected surprises.

Attendee taking selfie while in the Power Hour line.
Attendee taking selfie with the Atlanta Hawks mascot.

Meanwhile, FabCon TV broadcast live episodes from the Expo Hall throughout the event. Interviews with speakers, community members, and product experts captured the excitement of the conference and will continue to reach audiences through recordings shared after the event. Be sure to catch these episodes as they’re released on the Fabric YouTube channel!

Attendees talking, aquarium in background.

To celebrate the week, attendees gathered at the world-famous Georgia Aquarium to take in the unique surroundings and connect. Conversations flowed easily between admiring the marine ecosystem and discussing the next Fabric and SQL project, offering a memorable backdrop for new ideas and relationships to take shape.

Building on the energy of FabCon and SQLCon

FabCon and SQLCon 2026 highlighted the incredible momentum behind the global data community.

Over the course of the week, thousands of attendees learned new skills, built connections, and explored how Fabric and SQL technologies are shaping the future of data and AI.

Most importantly, the event celebrated the people who make this ecosystem so vibrant. The ideas, insights, and collaborations that began in Atlanta will continue long after the conference ends.

Thank you to everyone who joined us this year. We look forward to seeing what the Fabric and SQL communities build next, and we’re already excited to see everyone at the next FabCon and SQLCon in spring 2027.

Explore additional Fabric resources:

Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC
https://azure.microsoft.com/en-us/blog/building-the-future-together-microsoft-and-nvidia-announce-ai-advancements-at-gtc-dc/
Tue, 28 Oct 2025 18:30:00 +0000

New offerings in Azure AI Foundry give businesses an enterprise-grade platform to build, deploy, and scale AI applications and agents.

The post Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC appeared first on Microsoft AI Blogs.

Microsoft and NVIDIA are deepening our partnership to power the next wave of AI industrial innovation. For years, our companies have helped fuel the AI revolution, bringing the world’s most advanced supercomputing to the cloud, enabling breakthrough frontier models, and making AI more accessible to organizations everywhere. Today, we’re building on that foundation with new advancements that deliver greater performance, capability, and flexibility.

With added support for NVIDIA RTX PRO 6000 Blackwell Server Edition on Azure Local, customers can deploy AI and visual computing workloads in distributed and edge environments with the same seamless orchestration and management they use in the cloud. New NVIDIA Nemotron and NVIDIA Cosmos models in Azure AI Foundry give businesses an enterprise-grade platform to build, deploy, and scale AI applications and agents. With NVIDIA Run:ai on Azure, enterprises can get more from every GPU to streamline operations and accelerate AI. Finally, Microsoft is redefining AI infrastructure with the world’s first deployment of NVIDIA GB300 NVL72.

Today’s announcements mark the next chapter in our full-stack AI collaboration with NVIDIA, empowering customers to build the future faster.

Expanding GPU support to Azure Local

Microsoft and NVIDIA continue to drive advancements in artificial intelligence, offering innovative solutions that span the public and private cloud, the edge, and sovereign environments.

As highlighted in the March blog post for NVIDIA GTC, Microsoft will offer NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure. Now, with expanded availability of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure Local, organizations can optimize their AI workloads, regardless of location, to provide customers with greater flexibility and more options than ever. Azure Local leverages Azure Arc to empower organizations to run advanced AI workloads on-premises while retaining the management simplicity of the cloud or operating in fully disconnected environments. 

NVIDIA RTX PRO 6000 Blackwell GPUs provide the performance and flexibility needed to accelerate a broad range of use cases, from agentic AI, physical AI, and scientific computing to rendering, 3D graphics, digital twins, simulation, and visual computing. This expanded GPU support unlocks a range of edge use cases that fulfill the stringent requirements of critical infrastructure for our healthcare, retail, manufacturing, government, defense, and intelligence customers. This may include real-time video analytics for public safety, predictive maintenance in industrial settings, rapid medical diagnostics, and secure, low-latency inferencing for essential services such as energy production and critical infrastructure. The NVIDIA RTX PRO 6000 Blackwell enables improved virtual desktop support by leveraging NVIDIA vGPU technology and Multi-Instance GPU (MIG) capabilities. This can not only accommodate a higher user density, but also power AI-enhanced graphics and visual compute capabilities, offering an efficient solution for demanding virtual environments.

Earlier this year, Microsoft announced a multitude of AI capabilities at the edge, all enriched with NVIDIA accelerated computing:

  • Edge Retrieval Augmented Generation (RAG): Empower sovereign AI deployments with fast, secure, and scalable inferencing on local data—supporting mission-critical use cases across government, healthcare, and industrial automation.
  • Azure AI Video Indexer enabled by Azure Arc: Enables real-time and recorded video analytics in disconnected environments—ideal for public safety and critical infrastructure monitoring or post-event analysis.

With Azure Local, customers can meet strict regulatory, data residency, and privacy requirements while harnessing the latest AI innovations powered by NVIDIA.

Whether you need ultra-low latency for business continuity, robust local inferencing, or compliance with industry regulations, we’re dedicated to delivering cutting-edge AI performance wherever your data resides. Customers can now access the breakthrough performance of NVIDIA RTX PRO 6000 Blackwell GPUs in new Azure Local solutions—including Dell AX-770, HPE ProLiant DL380 Gen12, and Lenovo ThinkAgile MX650a V4.

To find out more about upcoming availability and sign up for early ordering, visit: 

Powering the future of AI with new models on Azure AI Foundry

At Microsoft, we’re committed to bringing the most advanced AI capabilities to our customers, wherever they need them. Through our partnership with NVIDIA, Azure AI Foundry now brings world-class multimodal reasoning models directly to enterprises, deployable anywhere as secure, scalable NVIDIA NIM™ microservices. The portfolio spans a range of different use cases:

NVIDIA Nemotron Family: High accuracy open models and datasets for agentic AI

  • Llama Nemotron Nano VL 8B is available now and is tailored for multimodal vision-language tasks, document intelligence and understanding, and mobile and edge AI agents. 
  • NVIDIA Nemotron Nano 9B is available now and supports enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling. 
  • NVIDIA Llama 3.3 Nemotron Super 49B 1.5 is coming soon and is designed for enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling.

NVIDIA Cosmos Family: Open world foundation models for physical AI

  • Cosmos Reason-1 7B is available now and supports robotics planning and decision making, training data curation and annotation for autonomous vehicles, and video analytics AI agents extracting insights and performing root-cause analysis from video data.
  • NVIDIA Cosmos Predict 2.5 is coming soon and is a generalist model for world state generation and prediction. 
  • NVIDIA Cosmos Transfer 2.5 is coming soon and is designed for structural conditioning and physical AI.

Microsoft TRELLIS by Microsoft Research: High-quality 3D asset generation 

  • Microsoft TRELLIS by Microsoft Research is available now and enables digital twins by generating accurate 3D assets from simple prompts, immersive retail experiences with photorealistic product models for AR and virtual try-ons, and game and simulation development by turning creative ideas into production-ready 3D content.

Together, these open models reflect the depth of the Azure and NVIDIA partnership: combining Microsoft’s adaptive cloud with NVIDIA’s leadership in accelerated computing to power the next generation of agentic AI for every industry. Learn more about the models here.

Maximizing GPU utilization for enterprise AI with NVIDIA Run:ai on Azure

As an AI workload and GPU orchestration platform, NVIDIA Run:ai helps organizations make the most of their compute investments, accelerating AI development cycles and driving faster time-to-market for new insights and capabilities. By bringing NVIDIA Run:ai to Azure, we’re giving enterprises the ability to dynamically allocate, share, and manage GPU resources across teams and workloads, helping them get more from every GPU.

NVIDIA Run:ai on Azure integrates seamlessly with core Azure services, including Azure NC and ND series instances, Azure Kubernetes Service (AKS), and Azure Identity Management, and offers compatibility with Azure Machine Learning and Azure AI Foundry for unified, enterprise-ready AI orchestration. We’re bringing hybrid scale to life to help customers transform static infrastructure into a flexible, shared resource for AI innovation.

With smarter orchestration and cloud-ready GPU pooling, teams can drive faster innovation, reduce costs, and unleash the power of AI across their organizations with confidence. NVIDIA Run:ai on Azure enhances AKS with GPU-aware scheduling, helping teams allocate, share, and prioritize GPU resources more efficiently. Operations are streamlined with one-click job submission, automated queueing, and built in governance. This ensures teams spend less time managing infrastructure and more time focused on building what’s next. 
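As a rough illustration of the baseline that GPU-aware scheduling builds on, a minimal AKS pod spec requesting a GPU through the standard Kubernetes device plugin might look like the following. This is an illustrative config fragment; the pod name and image are placeholders, and the comment about fractional sharing describes Run:ai's general capability rather than a field in this manifest:

```yaml
# Minimal AKS pod requesting one NVIDIA GPU via the standard Kubernetes
# device-plugin resource. GPU-aware schedulers such as NVIDIA Run:ai layer
# sharing, queueing, and prioritization on top of requests like this.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job                      # placeholder name
spec:
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3 # example training image
      resources:
        limits:
          nvidia.com/gpu: 1                   # whole-GPU request; Run:ai can
                                              # also share GPUs fractionally
```

Because plain Kubernetes only understands whole-GPU requests like this one, an orchestration layer that can pool, share, and reprioritize GPUs is what turns static allocations into the flexible shared resource described above.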

This impact spans industries, supporting the infrastructure and orchestration behind transformative AI workloads at every stage of enterprise growth: 

  • Healthcare organizations can use NVIDIA Run:ai on Azure to advance medical imaging analysis and drug discovery workloads across hybrid environments. 
  • Financial services organizations can orchestrate and scale GPU clusters for complex risk simulations and fraud detection models. 
  • Manufacturers can accelerate computer vision training models for improved quality control and predictive maintenance in their factories. 
  • Retail companies can power real-time recommendation systems for more personalized experiences through efficient GPU allocation and scaling, ultimately better serving their customers.

Powered by Microsoft Azure and NVIDIA, Run:ai is purpose-built for scale, helping enterprises move from isolated AI experimentation to production-grade innovation.


Reimagining AI at scale: First to deploy NVIDIA GB300 NVL72 supercomputing cluster

Microsoft is redefining AI infrastructure with the new NDv6 GB300 VM series, delivering the first at-scale production cluster of NVIDIA GB300 NVL72 systems, featuring over 4600 NVIDIA Blackwell Ultra GPUs connected via NVIDIA Quantum-X800 InfiniBand networking. Each NVIDIA GB300 NVL72 rack integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs, delivering over 130 TB/s of NVLink bandwidth and up to 136 kW of compute power in a single cabinet. Designed for the most demanding workloads—reasoning models, agentic systems, and multimodal AI—GB300 NVL72 combines ultra-dense compute, direct liquid cooling, and smart rack-scale management to deliver breakthrough efficiency and performance within a standard datacenter footprint. 
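A quick back-of-envelope check ties these figures together. The rack count below is our inference from the published per-rack numbers; only "over 4,600 GPUs," 72 GPUs per rack, and 36 CPUs per rack come from the announcement:

```python
# Back-of-envelope check on the GB300 NVL72 cluster figures. The rack
# count is inferred, not a number Microsoft states.
gpus_per_rack = 72
cpus_per_rack = 36
assumed_racks = 64                 # 64 x 72 = 4,608, i.e. "over 4,600" GPUs

total_gpus = assumed_racks * gpus_per_rack
total_cpus = assumed_racks * cpus_per_rack
print(f"GPUs: {total_gpus:,}, Grace CPUs: {total_cpus:,}")
```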

Azure’s co-engineered infrastructure enhances GB300 NVL72 with technologies like Azure Boost for accelerated I/O and integrated hardware security modules (HSM) for enterprise-grade protection. Each rack arrives pre-integrated and self-managed, enabling rapid, repeatable deployment across Azure’s global fleet. As the first cloud provider to deploy NVIDIA GB300 NVL72 at scale, Microsoft is setting a new standard for AI supercomputing—empowering organizations to train and deploy frontier models faster, more efficiently, and more securely than ever before. Together, Azure and NVIDIA are powering the future of AI. 

Learn more about Microsoft’s systems approach in delivering GB300 NVL72 on Azure.

Unleashing the performance of ND GB200-v6 VMs with NVIDIA Dynamo 

Our collaboration with NVIDIA focuses on optimizing every layer of the computing stack to help customers maximize the value of their existing AI infrastructure investments. 

To deliver high-performance inference for compute-intensive reasoning models at scale, we’re bringing together a solution that combines the open-source NVIDIA Dynamo framework, our ND GB200-v6 VMs with NVIDIA GB200 NVL72, and Azure Kubernetes Service (AKS). We’ve demonstrated the performance this combined solution delivers at scale with the gpt-oss 120b model, deployed in a production-ready, managed AKS cluster, processing 1.2 million tokens per second, and have published a deployment guide for developers to get started today. 

Dynamo is an open-source, distributed inference framework designed for multi-node environments and rack-scale accelerated compute architectures. By enabling disaggregated serving, LLM-aware routing and KV caching, Dynamo significantly boosts performance for reasoning models on Blackwell, unlocking up to 15x more throughput compared to the prior Hopper generation, opening new revenue opportunities for AI service providers. 

These efforts enable AKS production customers to take full advantage of NVIDIA Dynamo’s inference optimizations when deploying frontier reasoning models at scale. We’re dedicated to bringing the latest open-source software innovations to our customers, helping them fully realize the potential of the NVIDIA Blackwell platform on Azure. 

Learn more about Dynamo on AKS.

Get more AI resources

Microsoft’s commitment to supporting cloud infrastructure demand in Asia
https://azure.microsoft.com/en-us/blog/microsofts-commitment-to-supporting-cloud-infrastructure-demand-in-asia/
Thu, 09 Oct 2025 15:00:00 +0000

Microsoft is expanding its cloud infrastructure across Asia to help organizations scale with secure, AI-ready services. Learn more.

The post Microsoft’s commitment to supporting cloud infrastructure demand in Asia appeared first on Microsoft AI Blogs.

Microsoft supports cloud infrastructure demand in Asia

As Asia surges ahead in digital transformation, Microsoft is committed to expanding its cloud infrastructure to match the continent’s demand. In 2025, Microsoft launched new Azure datacenter regions in Malaysia and Indonesia, and is set to expand further with new datacenter regions launching in India and Taiwan in 2026. Microsoft is also announcing its intent to deliver a second datacenter region in Malaysia, called Southeast Asia 3. Across Asian markets, the company is investing billions to expand its AI infrastructure footprint—bringing cutting-edge AI, next-generation networking, and scalable storage to the world’s most populous region. These investments will empower enterprises across Asia to scale seamlessly, unlock the full value of their data, and capture new opportunities for growth.


Microsoft’s global infrastructure spans over 70 datacenter regions across 33 countries—more than any other cloud provider—designed to meet data residency, compliance, and performance requirements. In Asia, where businesses across financial services, public sector, manufacturing, retail, and start-ups are deeply integrated into the global economy, Microsoft’s strategically distributed datacenters deliver seamless scalability, low-latency connectivity, and regulatory assurance. By keeping critical data and applications close on fault-tolerant, high-capacity networking infrastructure, organizations can operate confidently across local and international markets—delivering fast, reliable services that meet customer expectations and comply with legal requirements.

With a dozen datacenter regions already live across Asia, we are making significant investments to expand further across the continent. These new regions will become some of the most integral datacenters in our Asia footprint:

East Asia

East Asia, a historically established market anchored by our Japan and Korea geographies, will see continued growth and expansion. In April 2025, Microsoft launched Azure Availability Zones in the Japan West region—enhancing resilience and efficiency as part of a two-year plan to invest in Japan’s AI and cloud infrastructure.


Additionally, Microsoft announced the launch of Microsoft 365 and associated data residency offerings for commercial customers in the Taiwan North cloud region. Azure services are also accessible to select customers in this region, with general availability for all customers expected in 2026.

Southeast Asia nations

Microsoft is also deepening its commitment in Southeast Asian countries through substantial investments, marked by the launch of new cloud regions in Indonesia and Malaysia in May 2025. The recently launched regions are designed with AI-ready hyperscale cloud infrastructure and three availability zones, providing organizations across Southeast Asia with secure, low-latency access to cloud services.

The recently launched Indonesia Central region is a welcome addition to this area of the world. It offers comprehensive Azure services and local Microsoft 365 availability, unlocking new capabilities for customer innovation. Our continued investments in Indonesia are expected to drive significant expansion, positioning this datacenter region to become one of the largest in Asia over the coming years. Today, more than 100 organizations are already using the Microsoft Cloud from Indonesia to accelerate their transformation, including:

  • Binus University is leveraging Azure Machine Learning and Azure OpenAI Service to enhance both campus operations and student learning. AI enables accurate student intake forecasting and automates diploma supplement summaries for over 10,000 graduates annually, improving operational efficiency. On the academic side, BINUS is developing AI-powered tools like personalized AI Tutors, generative AI in libraries for tailored book recommendations, and the Beelingua platform for interactive language learning, all aimed at creating a more adaptive, inclusive, and future-ready educational experience.
  • GoTo Group integrates GitHub Copilot into its engineering workflow, aiming to boost productivity and innovation. Nearly a thousand engineers have adopted the AI-powered coding assistant, which offers real-time suggestions, chat-based help, and simplified explanations of complex code, significantly speeding up the time to innovate.
  • Customers such as Adaro, BCA, Binus University, Pertamina, Telkom Indonesia, and Manulife have joined the Indonesia Central cloud region, gaining in-country access to Microsoft’s hyperscale infrastructure.

The Malaysia West datacenter region, our first cloud region in the country, helps empower Malaysia’s digital and AI transformation with access to Azure and Microsoft 365. A diverse group of organizations, enterprises, and startups are already leveraging the Malaysia West region including:

  • PETRONAS, Malaysia’s global energy and solutions provider, is partnering with Microsoft to leverage hyperscale cloud infrastructure to continue advancing its digital and AI transformation, as well as clean energy transition efforts in Asia.
  • Other customers using Microsoft’s new cloud region include FinHero, SCICOM Berhad, Senang, SIRIM Berhad, TNG Digital (the operator of TNG eWallet), and Veeam, along with more organizations expected to come onboard as demand for secure, scalable, and locally-hosted cloud services continues to grow across industries.

In Malaysia, Microsoft is expanding its digital infrastructure footprint further with a new datacenter region, Southeast Asia 3, planned in Johor Bahru. When this next-generation region comes online, it will feature Microsoft’s most comprehensive and strategic cloud services, designed to support advanced workloads and evolving customer needs from across the region.

In addition to Indonesia and Malaysia, Microsoft also announced a significant commitment in 2024 to enable a cloud- and AI-powered future for Thailand.

Indian subcontinent

The India geography already has several live datacenter regions, and this footprint will expand further with the launch of the Hyderabad-based India South Central datacenter region in 2026. This is part of a US$3 billion investment over two years in India’s cloud and AI infrastructure.

Consider a multi-region approach

Microsoft’s goal is to empower you to build and grow your business with unparalleled performance and availability. One of the best ways to position your organization for growth is to choose the right Azure regions.

Our infrastructure investments in Asia are driven by the need for greater agility and flexibility in today’s dynamic cloud environment. Organizations can build a more resilient foundation by not locking themselves into a single region, all while optimizing performance. This enables access to Azure services, resources, and capacity across a broader set of geographic areas. A multi-region approach allows businesses to rapidly adapt to changing demands while maintaining high service levels. Our cloud infrastructure supports this agility by distributing services across regions, helping ensure responsiveness and scalability during peak usage. Leveraging a multi-region cloud architecture with any of our Asia-based regions further strengthens application performance, latency, and overall resilience and availability of cloud applications—empowering organizations to stay ahead in a fast-evolving digital landscape.
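To make the multi-region idea above concrete, here is a minimal sketch of region-selection logic, assuming a simple health-and-latency table. The region names are real Azure regions, but the latency figures and the routing policy are hypothetical illustrations, not an Azure service:

```python
# Illustrative multi-region failover: route to the healthy region with the
# lowest latency, and fall back automatically when a region goes down.
# Region names are real Azure regions; latency figures are hypothetical.

REGIONS = [
    {"name": "southeastasia", "latency_ms": 12, "healthy": True},
    {"name": "japaneast", "latency_ms": 35, "healthy": True},
    {"name": "centralindia", "latency_ms": 48, "healthy": True},
]

def pick_region(regions: list[dict]) -> str:
    """Return the name of the healthy region with the lowest latency."""
    healthy = [r for r in regions if r["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy region available")
    return min(healthy, key=lambda r: r["latency_ms"])["name"]

print(pick_region(REGIONS))    # → southeastasia
REGIONS[0]["healthy"] = False  # simulate a regional outage
print(pick_region(REGIONS))    # → japaneast
```

In practice, Azure services such as Front Door or Traffic Manager handle this routing for you; the sketch simply shows why spreading workloads across nearby regions preserves responsiveness during an outage.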

Opportunities for cost optimization

Pricing is a critical factor when selecting the right Azure regions for your organization. Through our significant investments in Asia, Microsoft is now able to offer newer and more cost-effective Azure regions, catering to both small and large organizations. Our newest regions, like Indonesia Central, are designed to provide greater choice and flexibility, enabling businesses to optimize their cloud expenditures while maintaining high performance and availability.

Boost your cloud strategy

Use the Cloud Adoption Framework to achieve your cloud goals with best practices, documentation, and tools for business and technology strategies.

Use the Well-Architected Framework to optimize workloads with guidance for building reliable, secure, and performant solutions on Azure.

By choosing to deploy services through any of our Azure regions, customers can leverage the diverse and robust infrastructure that Microsoft is developing across Asia. This approach not only offers resilience and flexibility but also paves the way for innovative solutions that drive economic growth and a more connected future.

The post Microsoft’s commitment to supporting cloud infrastructure demand in Asia appeared first on Microsoft AI Blogs.

]]>
FabCon Europe: Highlights from the European Microsoft Fabric Community Conference 2025 http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/09/29/fabcon-europe-highlights-from-the-european-microsoft-fabric-community-conference-2025/ Mon, 29 Sep 2025 19:00:00 +0000 FabCon Vienna 2025 highlighted innovations, partnerships, and customer success shaping the future of data and AI.

The post FabCon Europe: Highlights from the European Microsoft Fabric Community Conference 2025 appeared first on Microsoft AI Blogs.

]]>
FabCon Vienna 2025 has officially come to a close and has set a benchmark for our conferences going forward! Together we delivered four days full of energy, inspiration, and connections, making this our most impactful European FabCon yet.

As our second annual event, the European Microsoft Fabric Community Conference brought more than 4,000 attendees to a sold-out event in Vienna. The week opened with a partner pre-day, followed by an executive track that gave senior leaders exclusive access to Microsoft leadership. On the main stage, our keynotes set the tone with the latest Fabric announcements that drew coverage across InfoWorld, SiliconANGLE, TechTarget, The Register, VentureBeat, The Neuron, and Le Monde.

With 130+ sessions, 11 full-day workshops, a vibrant community lounge, and dozens of expert-led booths, FabCon Vienna gave customers and partners the chance to learn, connect, and see how Fabric is transforming organizations worldwide. The event highlighted innovations, partnerships, and customer success shaping the future of data and AI. Let’s take a look at some of the best moments.

Speakers and attendees at the 2025 Fabric Community Conference.

Day 1 keynote

The conference kicked off with a big welcome from Arun Ulag, Corporate Vice President of Azure Data, as well as a video welcome from Satya Nadella, CEO of Microsoft. Arun and Satya highlighted the incredible customer momentum Fabric is experiencing, capped by appearances from Christian Meyers, Head of Platforms for Siemens AG, and Gijs Thieme, Chief Data and Analytics Officer for KPN. This was followed by an action-packed segment led by Amir Netz, CTO of Microsoft Fabric, where he and the product team unveiled the latest innovations coming to Microsoft Fabric.

Highlights included our new OneLake enhancements with Oracle and Google BigQuery mirroring (preview), Graph and Maps in Fabric (preview) to unlock connected and geospatial insights, and expanded developer tooling with the Fabric Extensibility Toolkit, Model Context Protocol, and deeper Git/VS Code integration. Furthermore, we shared announcements related to the latest enterprise-grade advancements in Fabric security such as Azure Private Link, customer-managed keys, and Synapse migration capabilities, plus new partner solutions from ESRI, Lumel, and Neo4j.

For a deeper dive into these announcements, watch the full Microsoft Fabric keynote or read FabCon Vienna: Build data-rich agents on an enterprise-ready foundation.

Day 2 keynote

Day two expanded the spotlight to the Fabric ecosystem, with senior Microsoft leaders showing how Fabric unifies databases, governance, and AI to help organizations on their journey to become AI Frontier Firms. Jessica Hawk opened the session by introducing the Frontier Firm framework and why every organization should consider its roadmap to becoming one. Shireesh Thota followed with a vision for how databases powering modern AI apps must offer the deployment flexibility of both PaaS and SaaS to meet the exploding demand for data.

Kim Manis then demonstrated how Fabric’s deeply integrated governance, security, and compliance stack, from the data center to M365, enables organizations to confidently manage and secure data access. Closing out the keynote portion, Marco Casalaina showcased how Azure AI services deliver real-time capabilities like translations and how Azure AI Foundry integrates with Fabric to bring data-specific skills to customer AI agents and enable low-code agents built in Copilot Studio.

FabCon TV

We were thrilled to introduce FabCon TV to the programming for the first time at FabCon Vienna. This new platform brought expert-led content live to a studio audience, with all segments recorded to share with our global community in the months ahead. Many of the Fabric Tech Talk Fridays (F2T2) episodes were also filmed on the FabCon TV stage, giving the community a unique opportunity to experience their favorite weekly series live.

Whether you want to relive FabCon highlights or catch up on F2T2 deep dives, FabCon TV will be the hub for ongoing learning and inspiration.

A panel of speakers sit on stage for FabConTV.

FabCon Community Lounge

The Community Lounge was once again a hub of connection and learning, bringing together MVPs, Super Users, and User Group Leaders for meetups, Q&A sessions, and community-led discussions. Attendees enjoyed interactive experiences like the “Fast at Fabric” challenge, sticker scavenger hunts, and a collaborative coloring wall, all designed to spark participation while promoting skilling and certification opportunities.

Swag was earned through intentional activities such as joining user groups or subscribing to community blogs, making engagement both fun and rewarding. The lounge also hosted a Diversity & Inclusion lunch, highlighted Microsoft Learn certifications, and featured live MVP sessions and giveaways, reinforcing its role as a central space for skilling, networking, and community growth. Keep the momentum going by joining the Fabric Community.

To celebrate FabCon Vienna, we’re offering all community members a 50% discount on exams DP-600, DP-700, DP-900, and PL-300. Request your voucher before October 3!

Additional highlights from FabCon Vienna 2025

Beyond the keynotes and sessions, FabCon Vienna 2025 delivered memorable moments that blended innovation with community spirit, including:

  • Hands-on exploration with unique experiences like a high-octane racing simulator powered by Fabric Real-Time Intelligence and expert-staffed booths for direct Q&A.
  • Community competitions and creativity were highlighted by the DataViz World Championship, where data storytellers competed for the crown.
  • A milestone celebration marking the 10th anniversary of Microsoft Power BI, honoring a decade of impact and innovation in analytics.
  • Unparalleled connections with Microsoft product leaders, MVPs, partners, and peers, strengthening relationships across the global Fabric community.
  • An exclusive, sold-out executive track offered senior leaders direct access to Microsoft executives, curated sessions, and peer-to-peer learning, capped by a memorable evening at a historic Viennese palace.
  • Power Hour delivered fun and creativity with live Fabric and Azure demos, the crowd-favorite Fabric Family Feud (Engineering vs. Marketing), and limited-edition Lego Power BI birthday sets, cementing it as a must-see FabCon tradition.
A speaker stands on a stage in front of a crowd at the 2025 Fabric Community Conference.

Join us at FabCon Atlanta and Microsoft Ignite

Mark your calendars! The next FabCon is coming to my hometown of Atlanta, Georgia, from March 16 to 20, 2026. I’m thrilled to see the Fabric community come together here for even more in-depth sessions, cutting-edge demos and announcements, and the networking that makes FabCon so special. Register today and use code MSCATL for a $200 discount on top of current early access pricing!

In the meantime, join us at Microsoft Ignite 2025. From November 18 to 21, 2025, experience the latest innovations across Microsoft Fabric and the full Microsoft Cloud live from San Francisco, the iconic hub of technology and culture, or join us online. We look forward to seeing you there.

And don’t stop there—hack the future of data and AI with Fabric. Compete in the Microsoft Fabric FabCon Global Hackathon, running through November 3, for your chance to win up to $10,000!

Explore additional resources for Microsoft Fabric

The post FabCon Europe: Highlights from the European Microsoft Fabric Community Conference 2025 appeared first on Microsoft AI Blogs.

]]>
Scaling generative AI in the cloud: Enterprise use cases for driving secure innovation  https://azure.microsoft.com/en-us/blog/scaling-generative-ai-in-the-cloud-enterprise-use-cases-for-driving-secure-innovation/ Tue, 29 Jul 2025 15:00:00 +0000 In our technical guide, “Accelerating Generative AI Innovation with Cloud Migration” we outline how IT and digital transformation leaders can tap into the power and flexibility of Azure to unlock the full potential of generative AI.

The post Scaling generative AI in the cloud: Enterprise use cases for driving secure innovation  appeared first on Microsoft AI Blogs.

]]>
Generative AI was made for the cloud. Only when you bring AI and the cloud together can you unlock the full potential of AI for business. For organizations looking to level up their generative AI capabilities, the cloud provides the flexibility, scalability and tools needed to accelerate AI innovation. Migration clears the roadblocks that inhibit AI adoption, making it faster and easier to not only adopt AI, but to move from experimentation to driving real business value.

Whether you are interested in tapping into real-time insights, delivering hyper-personalized customer experiences, optimizing supply chains with predictive analytics, or streamlining strategic decision-making, AI is reshaping how companies operate. Organizations relying on legacy or on-premises infrastructure are approaching an inflection point. Migration is not just a technical upgrade; it is a business imperative for realizing generative AI at scale. Without the flexibility the cloud provides, companies face higher costs, slower innovation cycles, and limited access to the data that AI models need to deliver meaningful results.

For IT and digital transformation leaders, choosing the right cloud platform is key to successfully deploying and managing AI. With best-in-class infrastructure, high-performance compute capabilities, enterprise-grade security, and advanced data integration tools, Azure offers a comprehensive cloud ecosystem that forward-thinking businesses can count on when bringing generative AI initiatives to bear. 

In our technical guide, “Accelerating Generative AI Innovation with Cloud Migration” we outline how IT and digital transformation leaders can tap into the power and flexibility of Azure to unlock the full potential of generative AI. Let us explore a few real-world business scenarios where generative AI in the cloud is driving tangible impact, helping companies move faster, innovate, and activate new ways of working.

Use case 1: Driving smarter, more adaptive AI solutions with real-time data

One of the biggest challenges in AI adoption? Disconnected or outdated data. Ensuring that AI models have access to the most current and relevant data is where retrieval-augmented generation (RAG) shines. RAG makes generative AI more accurate and reliable by pulling in real-time, trusted data, reducing the chance of errors and hallucinations.

How does deploying RAG impact businesses? 

Unlike traditional AI models that rely on historical data, RAG-powered AI is dynamic, staying up to date by pulling in the latest information from sources like SQL databases, APIs, and internal documents. This makes it more accurate in fast-changing environments. RAG models help teams: 

  • Automate live data retrieval, improving efficiency by reducing the need for manual updates. 
  • Make smarter, more informed decisions by granting access to the latest domain specific information. 
  • Boost accuracy and speed in interactive apps. 
  • Lower operational costs by reducing the need for human intervention. 
  • Tap into proprietary data to create differentiated outcomes and competitive advantages. 

Companies are turning to RAG models to generate more accurate, up-to-date insights by pulling in live data. This is especially valuable in fast-moving industries like finance, healthcare, and retail, where decisions rely on the latest market trends, access to sensitive data, regulatory updates, and personalized customer interactions. 
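The retrieve-then-augment flow described above can be sketched in a few lines. This is a toy illustration only: the keyword-overlap scorer and sample documents are invented stand-ins for what would, in practice, be Azure AI Search retrieval feeding an Azure OpenAI model:

```python
# Minimal illustration of the RAG flow: retrieve the most relevant
# snippets for a query, then prepend them to the prompt so the model
# answers from current, trusted data. (Scoring here is naive keyword
# overlap; a production system would use vector/hybrid retrieval.)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria menu changes weekly.",
    "Regulatory filing deadlines moved to March 31.",
]
prompt = build_prompt("When are the regulatory filing deadlines?", docs)
print(prompt)
```

Because the retrieval step runs at query time, the model's answer reflects whatever the data source says today, which is the property that reduces staleness and hallucination.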

The Azure advantage:

Cloud-based RAG apps help businesses move beyond static AI by enabling more adaptive, intelligent solutions. When RAG runs in the cloud, enterprises can benefit from reduced latency, high-speed data transfers, built-in security controls, and simplified data governance. 

Azure’s cloud services, including Azure AI Search, Azure OpenAI Service, and Azure Machine Learning, provide the necessary tools to support responsive and secure RAG applications. Together, these services help businesses stay responsive in rapidly changing environments so they are ready for whatever comes next. 

Use case 2: Embedding generative AI into enterprise workflows

Enterprise systems like enterprise resource planning (ERP) software, customer relationship management (CRM), and content management platforms are the backbone of daily operations and crucial to the success of an organization. However, they often rely on repetitive tasks and manual oversight. By integrating generative AI directly into these workflows, businesses can streamline tasks, unlock faster insights, and deliver more personalized, contextually relevant recommendations, all within the existing systems that teams are already using.

What is the business impact of embedding generative AI into enterprise application workflows? 

With AI built into core business applications, teams can work smarter and faster. With embedded generative AI in enterprise apps, industry leaders can: 

  • Optimize their operations by analyzing supply chain data on the fly, flagging anomalies and recommending actionable insights and proactive adjustments. 
  • Enrich customer experiences with personalized recommendations and faster response times. 
  • Automate routine tasks like data entry, report generation, and content management to reduce manual effort and expedite workflows. 

For organizations running on-premises ERP and CRM systems, the ability to integrate AI presents a compelling reason to move to the cloud.

The Azure advantage:

With Azure, companies can bring GenAI into everyday business operations without disrupting them, gaining scalable compute power, secure data access, and modernization while maintaining operational continuity. Migrating these systems to the cloud also simplifies AI integration by eliminating silos and enabling secure, real-time access to business-critical data. Cloud migration lays the foundation for continuous innovation, allowing teams to quickly deploy updates, integrate new AI capabilities, and scale across the enterprise without disruption. 

  • Azure services like Azure OpenAI Service, Azure Logic Apps, and Azure API Management facilitate seamless integration, amplifying ERP and CRM systems with minimal disruption. 
  • Microsoft’s collaborations with platforms like SAP showcase how cloud-powered AI delivers current intelligence, streamlined operations, and advanced security—capabilities that are difficult to achieve with on-premises infrastructure. 

When generative AI is embedded into core applications, it goes beyond supporting operations. It transforms them.

Use case 3: Generative search for contextually aware responses

As enterprise data continues to grow, finding the right information at the right time has become a major challenge. Generative search transforms how organizations access and use that information, empowering employees to make smarter decisions faster. It cuts through the noise by combining hybrid search with advanced AI models to deliver context-aware, tailored responses based on real-time data.

How can businesses use generative search to achieve real impact? 

With generative search, companies are better equipped to put their data to work. This approach is ideal for knowledge discovery, customer support, and document retrieval, where the goal is to provide meaningful insights, summaries, or recommendations. With it, enterprises can:

  • Improve customer support by delivering relevant, real-time responses based on customer data. 
  • Surface critical insights by quickly navigating unstructured and proprietary data. 
  • Summarize and extract key information from dense documents in less time. 

Across industries, generative search expands access to critical information, helping businesses move faster and smarter.

The Azure advantage:

Cloud-based generative search leverages the processing power and model options available in cloud environments.

  • Azure services like Azure AI Search, Azure OpenAI Service, and Azure Machine Learning enable productive integration of generative search into workflows, heightening context-aware search. Azure AI Search combines vector and keyword search to retrieve the most relevant data, while Azure OpenAI Service leverages models like GPT-4 to generate summaries and recommendations.
  • Azure Machine Learning ensures search outcomes remain precise through fine-tuning, and Azure Cognitive Search builds comprehensive indexes for improved retrieval.
  • Additional components, such as Azure Functions for dynamic model activation and Azure Monitor for performance tracking, further refine generative search capabilities, empowering organizations to harness AI-driven insights with confidence. 
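Conceptually, the hybrid (vector + keyword) retrieval mentioned above blends two relevance signals into one score. The sketch below uses toy two-dimensional "embeddings" and an assumed 50/50 weighting purely for illustration; real deployments would rely on Azure AI Search's built-in hybrid ranking:

```python
# Toy sketch of hybrid search scoring: combine keyword overlap with
# vector (cosine) similarity. The 2-D "embeddings" and 50/50 weights
# are made up for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid_score(query_words, query_vec, doc, w_keyword=0.5, w_vector=0.5):
    keyword = len(set(query_words) & set(doc["text"].lower().split())) / len(query_words)
    return w_keyword * keyword + w_vector * cosine(query_vec, doc["vec"])

docs = [
    {"text": "quarterly revenue report", "vec": (0.9, 0.1)},
    {"text": "office seating chart", "vec": (0.1, 0.9)},
]
query_words, query_vec = ["revenue", "report"], (0.8, 0.2)
best = max(docs, key=lambda d: hybrid_score(query_words, query_vec, d))
print(best["text"])  # → quarterly revenue report
```

The keyword term rewards exact matches while the vector term rewards semantic closeness, which is why the combination retrieves relevant documents even when wording differs.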

Use case 4: Smart automation with generative AI agents 

There has been plenty of chatter around agentic AI this year, and for good reason. Unlike traditional chatbots, generative AI agents autonomously perform tasks to achieve specific goals, adapting to user interactions and continuously improving over time without needing explicit programming for every situation.

How can AI agents impact a business’s bottom line? 

By optimizing their actions for the best possible outcomes, AI agents help teams streamline workflows, respond to dynamic needs, and amplify overall effectiveness. With intelligent agents in place, companies can:

  • Automate repetitive, routine tasks, boosting efficiency and freeing teams to focus on higher-value workflows.
  • Cut operational costs, thanks to reduced manual effort and increased process efficiency.
  • Scale effortlessly, handling increased workloads without additional headcount. 
  • Improve service delivery by enabling consistent and personalized customer experiences. 

This adaptability is especially valuable in industries with rapidly fluctuating customer demands, including e-commerce, financial services, manufacturing, communications, professional services, and healthcare.
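A goal-seeking agent of the kind described here can be sketched as a loop that observes, decides, and acts through tools. The tools, order IDs, and dispatch logic below are entirely hypothetical; a production agent would be built on a managed agent service rather than hand-rolled:

```python
# Minimal sketch of an agent pattern: the agent observes state through
# one tool, then decides whether to act through another. All names and
# data here are hypothetical stand-ins.

def lookup_order(order_id: str) -> str:
    """Hypothetical tool: check an order's shipping status."""
    return "delayed" if order_id == "A42" else "on time"

def send_update(order_id: str, status: str) -> str:
    """Hypothetical tool: notify the customer."""
    return f"customer notified: order {order_id} is {status}"

TOOLS = {"lookup_order": lookup_order, "send_update": send_update}

def run_agent(order_id: str) -> list[str]:
    """Observe the order, then act only if intervention is needed."""
    log = []
    status = TOOLS["lookup_order"](order_id)
    log.append(f"observed: {status}")
    if status == "delayed":
        log.append(TOOLS["send_update"](order_id, status))
    return log

print(run_agent("A42"))
```

The value of the pattern is that the decision of *which* tool to invoke, and when, is delegated to the agent instead of being hard-wired into a fixed workflow.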

The Azure advantage:

Cloud-based generative AI enables agents to access and process complex, distributed data sources in real time, sharpening their adaptability and accuracy. Microsoft Azure provides a comprehensive suite of tools to deploy and manage generative AI agents successfully: 

  • Azure AI Foundry Agent Service simplifies the enablement of agents capable of automating complex business processes from development to deployment. 
  • Azure OpenAI Service powers content generation and data analysis, while Azure Machine Learning enables fine-tuning and predictive analytics. 
  • Azure Cognitive Services polishes natural language understanding and Azure Databricks facilitates scalable AI model development.
  • For capable deployment and monitoring, Azure Kubernetes Service (AKS) streamlines containerized workloads, while Azure Monitor tracks live performance, ensuring AI agents operate optimally.

With these capabilities, Azure equips enterprises to harness the full potential of generative AI automation. 

The Azure advantage for generative AI innovation

Migrating to the cloud isn’t just a technical upgrade; it’s a strategic move for companies that want to lead in 2025 and beyond. By partnering with Azure, organizations can seamlessly connect AI models to critical data sources, applications, and workflows, integrating generative AI to drive tangible business outcomes. Azure’s infrastructure gives IT teams the tools to move fast and stay secure at scale. By shifting to a cloud-enabled AI environment, companies are positioning themselves to fully harness the power of AI and thrive in the era of intelligent automation.

The post Scaling generative AI in the cloud: Enterprise use cases for driving secure innovation  appeared first on Microsoft AI Blogs.

]]>
Maximize your ROI for Azure OpenAI https://azure.microsoft.com/en-us/blog/maximize-your-roi-for-azure-openai/ Wed, 18 Jun 2025 15:00:00 +0000 This blog breaks down the available pricing and deployment options, and tools that support scalable, cost-conscious AI deployments.

The post Maximize your ROI for Azure OpenAI appeared first on Microsoft AI Blogs.

]]>
When you’re building with AI, every decision counts—especially when it comes to cost. Whether you’re just getting started or scaling enterprise-grade applications, the last thing you want is unpredictable pricing or rigid infrastructure slowing you down. Azure OpenAI is designed with that in mind: flexible enough for early experiments, powerful enough for global deployments, and priced to match how you actually use it.

From startups to the Fortune 500, more than 60,000 customers are choosing Azure AI Foundry, not just for access to foundational and reasoning models—but because it meets them where they are, with deployment options and pricing models that align to real business needs. This is about more than just AI—it’s about making innovation sustainable, scalable, and accessible.

This blog breaks down the available pricing and deployment options, and tools that support scalable, cost-conscious AI deployments.

Flexible pricing models that match your needs

Azure OpenAI supports three distinct pricing models designed to meet different workload profiles and business requirements:

  • Standard—For bursty or variable workloads where you want to pay only for what you use.
  • Provisioned—For high-throughput, performance-sensitive applications that require consistent throughput.
  • Batch—For large-scale jobs that can be processed asynchronously at a discounted rate.

Each approach is designed to scale with you—whether you’re validating a use case or deploying across business units.


Standard

The Standard deployment model is ideal for teams that want flexibility. You’re charged per API call based on tokens consumed, which helps optimize budgets during periods of lower usage.

Best for: Development, prototyping, or production workloads with variable demand.

You can choose between:

  • Global deployments: To ensure optimal latency across geographies.
  • OpenAI Data Zones: For more flexibility and control over data privacy and residency.

With all deployment selections, data at rest is stored within the Azure region you choose for your resource.

Batch

The Batch model is designed for high-efficiency, large-scale inference. Jobs are submitted and processed asynchronously, with responses returned within a 24-hour target turnaround—at up to 50% less than Global Standard pricing. Batch also supports large-scale workloads, letting you process bulk requests with minimal friction and reduced processing time.

Best for: Large-volume tasks with flexible latency needs.

Typical use cases include:

  • Large-scale data processing and content generation.
  • Data transformation pipelines.
  • Model evaluation across extensive datasets.
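Batch jobs are submitted as a JSON Lines file containing one request per line. The sketch below shows one plausible way to assemble such a file; the deployment name, prompts, and exact request fields are assumptions, so consult the Azure OpenAI Batch documentation for the authoritative schema and submission steps:

```python
# Sketch of preparing a Batch job input file: one JSON request per line.
# Field names follow the commonly documented Batch JSONL shape, but the
# deployment name and prompts are hypothetical placeholders.
import json

def build_batch_file(prompts: list[str], deployment: str, path: str) -> int:
    """Write one JSONL line per request; return the number of requests."""
    with open(path, "w") as f:
        for i, prompt in enumerate(prompts):
            request = {
                "custom_id": f"task-{i}",
                "method": "POST",
                "url": "/chat/completions",
                "body": {
                    "model": deployment,
                    "messages": [{"role": "user", "content": prompt}],
                },
            }
            f.write(json.dumps(request) + "\n")
    return len(prompts)

n = build_batch_file(
    ["Summarize document 1", "Summarize document 2"],
    deployment="my-gpt-deployment",
    path="batch_input.jsonl",
)
print(n)  # → 2
```

The whole file is then uploaded and processed as a single asynchronous job, which is what makes the discounted, 24-hour-turnaround pricing possible.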

Customer in action: Ontada

Ontada, a McKesson company, used the Batch API to transform over 150 million oncology documents into structured insights. Applying LLMs across 39 cancer types, they unlocked 70% of previously inaccessible data and cut document processing time by 75%. Learn more in the Ontada case study.

Provisioned

The Provisioned model provides dedicated throughput via Provisioned Throughput Units (PTUs). This enables stable latency and high throughput—ideal for production use cases requiring real-time performance or processing at scale. Commitments can be hourly, monthly, or yearly with corresponding discounts.
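A quick way to reason about Standard versus Provisioned is a breakeven estimate: above a certain sustained token volume, a dedicated PTU commitment becomes cheaper than pay-as-you-go. The sketch below uses entirely hypothetical rates and ignores commitment-term discounts for simplicity.

```python
def breakeven_tokens_per_hour(ptu_hourly_cost, paygo_cost_per_1k_tokens):
    """Sustained tokens/hour above which a provisioned (PTU) deployment
    costs less than Standard pay-as-you-go.

    Both rates are hypothetical placeholders, not published Azure prices.
    """
    return ptu_hourly_cost / paygo_cost_per_1k_tokens * 1000

# Hypothetical rates: $2.00/hour provisioned, $0.01 per 1K tokens pay-as-you-go.
threshold = breakeven_tokens_per_hour(2.00, 0.01)
print(f"{threshold:,.0f} tokens/hour")  # 200,000 tokens/hour
```

If your application reliably exceeds the threshold during business hours, PTUs buy both lower unit cost and consistent latency; below it, Standard remains the economical choice.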

Best for: Enterprise workloads with predictable demand and the need for consistent performance.

Common use cases:

  • High-volume retrieval and document processing scenarios.
  • Call center operations with predictable traffic hours.
  • Retail assistant with consistently high throughput.

Customers in action: Visier and UBS

  • Visier built “Vee,” a generative AI assistant that serves up to 150,000 users per hour. By using PTUs, Visier improved response times threefold compared to pay-as-you-go models and reduced compute costs at scale. Read the case study.
  • UBS created “UBS Red,” a secure AI platform supporting 30,000 employees across regions. PTUs allowed the bank to deliver reliable performance with region-specific deployments across Switzerland, Hong Kong, and Singapore. Read the case study.

Deployment types for standard and provisioned

To meet growing requirements for control, compliance, and cost optimization, Azure OpenAI supports multiple deployment types:

  • Global: Most cost-effective, routes requests through the global Azure infrastructure, with data residency at rest.
  • Regional: Keeps data processing in a specific Azure region (28 available today), with data residency both at rest and processing in the selected region.
  • Data Zones: Offers a middle ground—processing remains within geographic zones (E.U. or U.S.) for added compliance without full regional cost overhead.

Global and Data Zone deployments are available across Standard, Provisioned, and Batch models.

Dynamic features help you cut costs while optimizing performance

Several dynamic new features, designed to help you get the best results at lower cost, are now available.

  • Model router for Azure AI Foundry: A deployable AI chat model that automatically selects the best underlying chat model to respond to a given prompt. Perfect for diverse use cases, model router delivers high performance while saving on compute costs where possible, all packaged as a single model deployment.
  • Batch large-scale workload support: Processes bulk requests at lower cost. Efficiently handle large-scale workloads to reduce processing time, with a 24-hour target turnaround, at up to 50% less cost than Global Standard.
  • Provisioned throughput dynamic spillover: Provides seamless overflowing for your high-performing applications on provisioned deployments. Manage traffic bursts without service disruption.
  • Prompt caching: Built-in optimization for repeatable prompt patterns. It accelerates response times, scales throughput, and helps cut token costs significantly.
  • Azure OpenAI monitoring dashboard: Continuously track performance, usage, and reliability across your deployments.
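Prompt caching typically matches on a shared prompt prefix, so it helps to keep stable instructions at the front of every request and put the variable content last. Below is a minimal sketch assuming this prefix-matching behavior (verify exact thresholds in the Azure OpenAI prompt-caching documentation); the company name and prompts are hypothetical.

```python
# Keep the static instructions identical and first on every call so the
# cache can match the shared prefix; only the tail varies per request.
STATIC_SYSTEM_PROMPT = (
    "You are a support assistant for Contoso (hypothetical company). "
    "Be concise and cite knowledge-base articles where relevant."
)

def build_messages(user_question):
    """Same static prefix on every call maximizes cache-hit potential."""
    return [
        {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # cacheable prefix
        {"role": "user", "content": user_question},           # varies per request
    ]

m1 = build_messages("How do I reset my password?")
m2 = build_messages("Where is my invoice?")
assert m1[0] == m2[0]  # identical prefix across requests
```

The same structuring habit pays off even without caching, since consistent system prompts make behavior easier to evaluate and monitor.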

To learn more about these features and how to leverage the latest innovations in Azure AI Foundry models, watch this session from Build 2025 on optimizing Gen AI applications at scale.

Integrated Cost Management tools

Beyond pricing and deployment flexibility, Azure OpenAI integrates with Microsoft Cost Management tools to give teams visibility and control over their AI spend.

Capabilities include:

  • Real-time cost analysis.
  • Budget creation and alerts.
  • Support for multi-cloud environments.
  • Cost allocation and chargeback by team, project, or department.

These tools help finance and engineering teams stay aligned—making it easier to understand usage trends, track optimizations, and avoid surprises.

Built-in integration with the Azure ecosystem

Azure OpenAI is part of a larger ecosystem of Azure data and AI services. This integration simplifies the end-to-end lifecycle of building, customizing, and managing AI solutions. You don’t have to stitch together separate platforms—and that means faster time-to-value and fewer operational headaches.

A trusted foundation for enterprise AI

Microsoft is committed to enabling AI that is secure, private, and safe. That commitment shows up not just in policy, but in product:

  • Secure future initiative: A comprehensive security-by-design approach.
  • Responsible AI principles: Applied across tools, documentation, and deployment workflows.
  • Enterprise-grade compliance: Covering data residency, access controls, and auditing.

Get started with Azure AI Foundry

The post Maximize your ROI for Azure OpenAI appeared first on Microsoft AI Blogs.

]]>
The art of simplifying the complex: Microsoft Fabric’s superpower http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/02/24/the-art-of-simplifying-the-complex-microsoft-fabrics-superpower/ Mon, 24 Feb 2025 16:00:00 +0000 The art of simplifying the complex involves distilling intricate ideas, processes, and systems into their essential elements to create a unified experience accessible to a broader audience.

The post The art of simplifying the complex: Microsoft Fabric’s superpower appeared first on Microsoft AI Blogs.

]]>
The art of simplifying the complex involves distilling intricate ideas, processes, and systems into their essential elements to create a unified experience accessible to a broader audience. When done correctly, it does not reduce capability but rather enables innovation.

Microsoft Fabric has embraced this mission by integrating the multiple products and services needed for an end-to-end analytics and AI solution, redefining existing processes to make them simpler and more intuitive. It significantly simplifies how users interface with such a comprehensive solution through a turnkey software-as-a-service experience that is easy to use, built on a single, much simpler capacity usage model.

At Ignite 2024, Fabric took another bold step forward by adding operational databases to the Fabric portfolio with SQL database in Fabric. Adding operational data alongside Fabric’s OLAP (Online Analytical Processing) data and the real-time streaming data of Real-Time Intelligence (RTI) opens a host of new scenarios for agentic AI applications. It also provides our customers with a unified data estate where consistent security and governance policies can be applied.

SQL database in Fabric leverages the proven mission-critical SQL Server database engine. It applies the core principles of Fabric to make deploying and managing an operational database simpler, more autonomous, secure by default, and optimized for AI. For example, deploying and configuring a database only requires a name, and the database is ready in seconds. It is secure by default with encryption at rest and in transit enabled. Networking security is also enabled via Private Link, and high availability and zone redundancy are automatically configured. 

SQL in Fabric includes native AI capabilities like support for vector and RAG (Retrieval-augmented Generation). You can also make calls directly to Azure AI services from the database and connect your database to Azure AI Foundry, VSCode, and GitHub for an integrated developer experience. In addition, you will find Microsoft Copilot integrated into every workload in Fabric including SQL in Fabric, simplifying administrative and management tasks for the databases. 

Beyond just the OLTP (Online Transaction Processing) database, Fabric introduces new agentic AI application scenarios by providing access to real-time streaming data from IoT sensors, alongside your system of record with SQL in Fabric and other data sets securely stored in OneLake. 

OneLake is at the heart of enabling a unified data estate. OneLake is built on top of Azure Data Lake Storage (ADLS) Gen2 and can support any type of file, structured or unstructured. All Fabric data items like data warehouses and lakehouses store their data automatically in OneLake in Delta Parquet format. With the addition of SQL database in Fabric, you now also have access to your mirrored SQL data in OneLake, and mirroring data in Fabric is free. 

Fabric also provides a rich ecosystem to support agentic AI applications using your operational data. Changes from your data can be seamlessly sent to Azure OpenAI for business recommendations using Fabric Real-time Intelligence Eventstream, Spark, OneLake, and Power BI. 

This unification of data is incredibly powerful, enabling dynamic improvements to customer prompt responses and proactive, personalized offers. From a security standpoint, Fabric can enable consistent data protection from the moment data is created to the business insights delivered via Power BI. The same goes for data governance. 

This is just the tip of the iceberg when it comes to the number of new scenarios that Fabric can enable by creating a unified data estate. SQL database in Fabric is just the first Azure Database to be added to Fabric, with more Azure Databases to follow, so stay tuned.

Get started today

SQL database in Fabric is simple, autonomous, secure, and optimized for AI. We highly encourage you to try it today and see how you can build new AI apps faster and easier than ever! 

Learning with Fabric

We have multiple resources to help you and your teams swiftly ramp up on SQL database in Fabric: 

Fabric Community Conference Vegas: A must-attend event for database professionals! Be sure to take advantage of the discount code MSCUST for $150 off the registration price. 

FabCon Vegas is the perfect opportunity to connect with experts and data leaders to build your skills with Fabric Databases and Azure Databases and see how your peers are implementing their solutions. 

  • Microsoft Fabric Community Conference March 31st – April 2nd, in Vegas! Workshops will also be available on March 29th, 30th, and April 3rd, making this the most comprehensive Microsoft Fabric learning experience to date.
  • SQL pros can take advantage of a dedicated track for SQL in Fabric Databases and Azure Databases. 
  • Connect with product specialists for 1:1 support in the Ask the Experts area. 
  • You’ll get endless opportunities all week to engage with the Fabric and data communities through sessions, thoughtful discussions, attendee mixers, and interactive activations. 
  • In touch with your Microsoft account team? Ask them if they have any special discounts to share.

Database experts at FabCon

  • CVP of Azure Databases: Shireesh Thota, speaking at the event.1
  • Sessions from the Microsoft Databases Product team: Rie Merritt, Bob Ward, Mazuma Zahid, Erin Stellato, Davide Mauri, and more.
  • Sessions from Database Community MVPs: Joey D’Antoni, John Morehouse, Monica Rathbun, Denny Cherry, Karen Lopez, Anthony Nocentino, Erwin de Kreuk, Warwick Rudd, Kelly Broekstra, Heidi Hasting, and Hamish Watson.

1 Speakers subject to change.

The post The art of simplifying the complex: Microsoft Fabric’s superpower appeared first on Microsoft AI Blogs.

]]>
Power your AI transformation with Microsoft Fabric skilling plans and a certification discount http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/02/20/power-your-ai-transformation-with-microsoft-fabric-skilling-plans-and-a-certification-discount/ Thu, 20 Feb 2025 16:00:00 +0000 As we continue enhancing Fabric's capabilities, we are pleased to share several significant new skilling opportunities to help further empower your data analytics journey.

The post Power your AI transformation with Microsoft Fabric skilling plans and a certification discount appeared first on Microsoft AI Blogs.

]]>
Looking for a competitive edge in the era of AI and today’s data-powered business world? Microsoft Fabric transforms data into actionable intelligence, empowering your organization to optimize operations, uncover growth, and mitigate risks with a unified data solution. As we continue enhancing Fabric’s capabilities, we are pleased to share several significant new skilling opportunities to help further empower your data analytics journey.

In this blog, we’ll lay out the latest and greatest of our curated Fabric skilling paths on Microsoft Learn to help your team drive transformative business outcomes. We’re also announcing a new certification exam available with a 50% discount! And of course, we’ll dive into our exciting upcoming in-person event, FabCon, where we’ll have even more surprises in store, plus a chance for you to connect with industry experts and the larger data-analysis community. Let’s get started!

Get certified as a Fabric Data Engineer

Learning Microsoft Fabric equips aspiring engineers with skills to streamline workflows, handle large-scale data processing, and integrate advanced AI tools. As a Fabric Data Engineer, you’ll have the chance to design and manage cutting-edge data solutions that drive AI-powered insights. That’s why we’re thrilled to announce the general availability of our new certification for Fabric Data Engineers.

By earning your Microsoft Certified: Fabric Data Engineer Associate certification, you’ll be equipped with an industry-recognized credential to set you apart in the growing field of data and AI. But you don’t have to do it alone. We’ve convened two live and on-demand series of expert-led walkthroughs to help you either get started with Fabric or build on your existing skills. Designed with Fabric Data Engineers in mind, these Microsoft Fabric Learn Together sessions (available in four time zones and three languages) are intended to give you the knowledge and confidence to ace your certification exam and take your data engineering career to the next level.

Want to explore the ins and outs of Fabric on your own time? We also have an official plan on Microsoft Learn featuring everything you’ll need to learn to pass the DP-700 Fabric Data Engineer Associate certification exam, including: 

  • Describe the core features and capabilities of lakehouses in Microsoft Fabric.
  • Use Apache Spark DataFrames to analyze and transform data.
  • Use Real-Time Intelligence to ingest, query, and process streams of data.
  • And much more! 

There’s more: For a limited time, you can get 50% off the cost of the DP-700 Fabric Data Engineer Associate certification exam. To be eligible, either attend one of the Learn Together sessions, complete the Plan on Microsoft Learn, or have previously passed the DP-203 exam. You have until March 31, 2025, to request the discount voucher, so get started fast-tracking your data engineering career today!

Join a community of Fabric users and experts at FabCon Las Vegas 

No matter your role or skill level, you can connect with other Fabric users and experts at the Fabric Community Conference from March 31-April 2, 2025, in Las Vegas. Join us at the MGM Grand for the ultimate Microsoft Fabric, Power BI, SQL, and AI event featuring over 200 sessions with speakers covering exciting new Fabric features and skilling opportunities. 

Connect one-on-one with community and product experts, including a dedicated partner pre-day, all-day Ask-the-Experts hours, a bustling expo hall, and plenty of after-hours social events. Workshops will also be available on March 29th, 30th, and April 3rd, making this the most comprehensive Microsoft Fabric learning experience to date. 

Don’t miss out! Register today to grab the early bird discount and use code MSCUST for $150 off registration. 

Build AI apps faster with SQL databases in Fabric 

Fabric’s capabilities and versatility are always expanding. We recently introduced a public preview of SQL databases to make building AI apps faster and easier than ever. SQL Database in Microsoft Fabric provides a unified, autonomous, and AI-optimized platform that accelerates app development by up to 71%, empowering businesses to innovate faster and gain a competitive edge in the AI era. 

To enhance your skills in working with SQL databases in Fabric, we’ve designed a new learning path called Implement operational databases in Microsoft Fabric. This course will guide you through the process of creating and managing SQL databases within the Fabric environment. You’ll also learn how to provision an SQL database, configure security settings, and perform essential database operations.

The course covers important topics such as data modeling, query optimization, and performance tuning specific to Fabric’s SQL capabilities. By completing this learning path, you’ll gain hands-on experience with Fabric’s SQL features and be better equipped to design and implement efficient database solutions.

You can also watch on-demand sessions of a recent SQL Database in Fabric Learn Together series to see how to build reliable, highly scalable applications where cloud authentication and encryption are secured by default. 

Unlock AI-ready insights and transform your data 

There’s always more to discover on Microsoft Learn, including a plan to help you harness AI and unify your intelligent data and analytics on the Fabric platform. With the Make your data AI-ready with Microsoft Fabric plan on Microsoft Learn, you’ll find out how to implement large-scale data engineering, build a lakehouse, and explore warehouse solutions.

This free, curated, and self-paced plan guides you through key learning milestones:

  • Ingesting data through shortcuts, mirroring, pipelines, and dataflows. 
  • Transforming data using dataflows, procedures, and notebooks. 
  • Storing processed data in the lakehouse and data warehouse for easy retrieval. 
  • Exposing data by creating reusable semantic models in Power BI, making transformed data accessible for analysis. 

Kick off your data and AI journey at Microsoft Learn 

If you’re looking to expand your Microsoft Fabric expertise and accelerate your professional development, we have everything you need:

  • Harness AI to unify your data and analytics with the official plan on Microsoft Learn: Make your data AI-ready with Microsoft Fabric.

The post Power your AI transformation with Microsoft Fabric skilling plans and a certification discount appeared first on Microsoft AI Blogs.

]]>
Microsoft: A leader in the 2024 Gartner Magic Quadrant report http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/12/09/microsoft-a-leader-in-the-2024-gartner-magic-quadrant-report/ Mon, 09 Dec 2024 16:00:00 +0000 We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration.

The post Microsoft: A leader in the 2024 Gartner Magic Quadrant report appeared first on Microsoft AI Blogs.

]]>
We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration. 

Figure: 2024 Gartner Magic Quadrant for Data Integration Tools

A Leader in Data Integration 

We feel that Microsoft’s acknowledgment in the Gartner Magic Quadrant reflects our dedication to innovation and customer-centric solutions. This stems from our relentless drive to advance technology and address the ever-evolving needs of modern organizations.

Our vision for data integration is to deliver seamless, intuitive experiences that empower businesses to unlock the full potential of their data and achieve transformative results. This recognition reinforces our dedication to leading the evolution of data integration and delivering unparalleled value to our customers and partners worldwide.

Microsoft Fabric: Unified Data Platform for the Era of AI 

At the core of our data integration strategy is Microsoft Fabric. Built to navigate the complexities of modern data ecosystems, Microsoft Fabric provides an all-in-one, software-as-a-service (SaaS) platform with AI-powered services to handle any data project—all within a pre-integrated and optimized environment. It enables organizations to unlock their data’s full potential, drive innovation, and make smarter decisions. Features like Copilot and other generative AI tools introduce new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric.

Microsoft OneLake: The heart of our Data Integration journey 

At the center of our Fabric is OneLake, the unified, open data lake that simplifies and accelerates data integration across diverse systems. OneLake, with the data integration capabilities of Fabric, is designed to help you simplify data management and reduce data duplication. OneLake’s open data format means you only need to load the data into the lake once and you can use the single copy across every Fabric workload and engine. It acts as the central hub, ensuring seamless connectivity, accessibility, and collaboration for all your data needs. 

OneLake has four innovative pathways for integrating data depending on your needs: 

  1. Fabric Data Factory 

Fabric Data Factory integrates seamlessly with OneLake, offering powerful cloud-scale services for data movement, orchestration, transformation, deployment, and monitoring. These capabilities enable organizations to tackle even the most complex ETL (Extract, Transform, and Load) scenarios, unifying data estates, streamlining operations, and unlocking the full potential of their data.

  2. Multi-Cloud Shortcuts

OneLake shortcuts allow you to virtualize data into OneLake from across clouds, accounts, and domains—all without duplication, movement, or changes to metadata or ownership. This capability allows organizations to access and analyze their data in place, without the need for complex data migration processes. By maintaining a live connection to the source, OneLake ensures real-time data availability and consistency across all integrated environments. You can shortcut data from Azure Data Lake Storage, S3-compatible sources, Iceberg-compatible sources, Google Cloud Platform, Dataverse, and more.

  3. Database Mirroring 

OneLake offers an innovative zero-ETL approach to database mirroring, simplifying the replication of operational databases into the lake. This capability minimizes the effort required to synchronize databases, supporting real-time changes and ensuring that data is always current and ready for analytics and reporting.

  4. Real-Time Intelligence 

Real-Time Intelligence in Microsoft Fabric empowers organizations to ingest and process streaming and high-granularity data instantaneously, driving real-time insights and automating decision-making. This solution is ideal for applications requiring immediate data updates, such as IoT analytics, fraud detection, and operational dashboards. The capability extends to highly granular data analytics, allowing businesses to track a single package within a global delivery network or monitor a specific component in a manufacturing machine across a fleet of factories worldwide, enabling precise insights and optimized operations. Leveraging cutting-edge data processing frameworks, Eventhouse, the engine behind Real-Time Intelligence, ensures scalability, reliability, and low-latency performance, making it suitable for high-volume streaming scenarios.

With these innovative pathways, Fabric empowers organizations to break down data silos, optimize workflows, and unlock the full potential of their data. Whether it’s through seamless data integration, real-time insights, or multi-cloud collaboration, Fabric is designed to meet the demands of modern data ecosystems. These enriched features position Fabric as a critical tool for organizations aiming to unlock the full potential of their data while maintaining simplicity, security, and scalability.

Customer success stories 

Our customers’ success stories are a testament to the impact of Microsoft Fabric. Organizations across various industries have leveraged our data integration capabilities to unlock new opportunities, drive innovation, and achieve their business goals. By streamlining data processes and improving data quality, Microsoft Fabric has enabled these businesses to make data-driven decisions with confidence. 

Read UST Global’s case study to learn how they leveraged the power of Fabric to migrate over 20 years of data, integrating disparate data sources to facilitate better collaboration and innovation among employees. 

Looking ahead: The future of Data Integration with Microsoft Fabric 

As we celebrate being recognized as a Leader in the Gartner Magic Quadrant for the fourth consecutive year, we are motivated to push the boundaries of what’s possible in data integration. To us, this is a milestone that reflects not only our commitment to innovation but also our dedication to empowering our customers to turn their data into actionable insights.

Looking forward, the roadmap for Microsoft Fabric is filled with exciting enhancements and new features. These advancements are designed to tackle the complexities of modern data ecosystems, making it even easier for organizations to unify, transform, and harness their data at scale. Continuous improvement is at the core of our strategy. We aim to remain at the forefront of the data integration landscape and redefine the possibilities of what a comprehensive data platform can achieve. 

We believe this recognition by Gartner is a validation of the trust our customers place in us and a reflection of our relentless drive to deliver world-class solutions. As we continue this journey, we remain committed to collaborating with our community and partners, building on this success to achieve even greater outcomes together.

Resources 


Gartner, Magic Quadrant for Data Integration Tools, By Thornton Craig, Sharat Menon, Robert Thanaraj, Michele Launi, Nina Showell, 3 December 2024 

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. Available here.

The post Microsoft: A leader in the 2024 Gartner Magic Quadrant report appeared first on Microsoft AI Blogs.

]]>