Azure SQL Database - Microsoft SQL Server Blog
http://approjects.co.za/?big=en-us/sql-server/blog/product/azure-sql-database/
Official News from Microsoft's Information Platform

FabCon and SQLCon 2026: Unifying databases and Fabric on a single data platform
https://azure.microsoft.com/en-us/blog/fabcon-and-sqlcon-2026-unifying-databases-and-fabric-on-a-single-data-platform/
Wed, 18 Mar 2026 12:45:00 +0000

The post FabCon and SQLCon 2026: Unifying databases and Fabric on a single data platform appeared first on Microsoft SQL Server Blog.

Welcome to the third annual FabCon and our first ever SQLCon here in Atlanta, Georgia. With nearly 300 workshops and sessions, this joint event will highlight how we are bringing the power of Microsoft SQL and Microsoft Fabric together to create a single, unified platform. But FabCon 2026 and SQLCon 2026 are about more than product innovation. They are about providing space for our 8,000 attendees to come together, share real experiences, learn from each other, and solve challenges side by side. Only together can we move beyond the hype and into meaningful results.

Learn more about FabCon and SQLCon 2026
The excitement surrounding this event reflects the same momentum we’re seeing across our data portfolio. Just two and a half years after Microsoft Fabric reached general availability, it’s already serving more than 31,000 customers and remains the fastest-growing data platform in Microsoft’s history. Fortune 500 companies like The Coca-Cola Company are already using Fabric at scale across their organizations.

Microsoft Fabric is helping us evolve our data foundation into a more unified, AI-ready platform. Combined with Power BI and capabilities like Fabric IQ, it enables the enterprise to turn data into intelligence and act on it faster.

Shekhar Gowda, Vice President of Global Marketing Technologies at The Coca-Cola Company
Our databases are accelerating just as quickly, with SQL Server 2025 growing more than twice as fast as the previous version.

Today, we’re thrilled to share how we are bringing the power of databases and Fabric together to form a truly converged data platform—one that unifies transactional, operational, and analytical data under a single, consistent architecture. I’ll also highlight how we’ve enhanced Fabric to help you transform data into the semantic knowledge AI needs to understand your business, powered by Fabric IQ and Power BI’s industry-leading semantic model technology.

Introducing the Database Hub in Microsoft Fabric
Databases sit at the heart of the enterprise data estate—a system of record powering applications, transactions, and mission‑critical insights. Yet as organizations scale across cloud, on‑premises, and edge environments, database estates have become increasingly fragmented and isolated. As AI places even greater demands on data estates, unifying databases under a single access point and control plane has become essential.

To address this challenge, Fabric is expanding its role as the central access point for enterprise data with the Database Hub in Fabric, now available in early access. Database Hub in Fabric provides a unified database management experience that brings together databases across edge, cloud, and Fabric into a single, coherent view. Teams now have one place to explore, observe, govern, and optimize their entire database estate—including Azure SQL, Azure Cosmos DB, Azure Database for PostgreSQL, SQL Server (enabled by Azure Arc), Azure Database for MySQL, and Fabric Databases—without changing how each service is deployed.

Built for scale, the Database Hub in Fabric introduces an agent‑assisted, human-in-the loop approach to database management. With built-in observability, delegated governance, and Microsoft Copilot-powered insights, teams can deploy intelligent agents to continuously reason over estate‑wide signals and surface what changed, explain why it matters, and guide teams toward what to do next. The result is a simpler, more confident way to manage databases at scale. Over time, this model enables database estates to become more proactive, resilient, and intelligent, laying the foundation for greater autonomy, while keeping humans firmly in control of goals, boundaries, and trust.

Learn more about Database Hub in Fabric and what’s new across Databases
Bringing databases together under a single management layer is a critical step as you prepare your estates for AI at scale. But it’s not the end of the journey. The challenge shifts from where data lives to how data is understood, connected, and activated across the enterprise.

Getting your data estate ready for AI with Fabric
As organizations move from traditional applications to AI‑powered, multi‑agent systems, the advantage is shifting away from the specific model you deploy. It now lies in the intelligence and context that allow agents to understand how your business runs, the current state of your business, and your institutional knowledge, so they can take meaningful action.

This is the challenge Microsoft IQ is designed to address. Unlike point solutions on the market today, Microsoft IQ provides an intelligence layer that delivers shared, enterprise-grade business context to every agent. That context is built from three complementary sources: productivity signals from Work IQ, institutional knowledge from Foundry IQ, and live business data from Fabric IQ.

However, like the database layer, the IQ context layer is a critical part of a successful, healthy AI foundation, but it is not the full story. Building a complete AI-ready data foundation requires investing in four core steps:

1. Unifying your data estate to eliminate silos and reduce architectural complexity.
2. Processing and harmonizing data so it becomes AI-ready: clean, connected, and structured for both operational and analytical use.
3. Curating semantic meaning to give agents contextual understanding, enabling them to interpret data the way your teams already do. This is where Microsoft IQ comes into play.
4. Empowering AI agents to act, applying that context to automate workflows, accelerate decisions, and transform operations end‑to‑end.
Unifying your data estate with Microsoft OneLake
Every AI initiative starts with the same fundamental challenge: understanding where your data lives and how to bring it together. Microsoft OneLake was built to solve that problem by unifying data across clouds, on-premises environments, and third-party platforms into a single logical data lake without unnecessary extracting, transforming, and loading (ETL), fragmentation, or duplicated copies.

Connecting to more sources than ever before
Today, we’re expanding Mirroring in Fabric to support even more systems our customers rely on. Mirroring for SharePoint lists and Dremio is now in preview, with Azure Monitor coming soon, while mirroring for Oracle and SAP Datasphere is generally available; all of these are included in the core mirroring capabilities. We are also introducing extended capabilities in mirroring designed to help you operationalize mirrored sources at scale, including Change Data Feed (CDF) and the ability to create views on top of mirrored data, starting with Snowflake. Extended capabilities for mirroring will be offered as a paid option.

Shortcut transformations are also now generally available, allowing data to be shaped automatically as it connects to or moves within OneLake. You can convert formats such as Excel to Delta tables, now in preview, and apply AI-powered transformations.

Additionally, we are continuing to invest in open interoperability, ensuring OneLake works seamlessly with the platforms organizations already use. We are excited to announce the ability to natively read from OneLake through Azure Databricks Unity Catalog is now in public preview. We also recently announced the general availability of our interoperability with Snowflake.

I’m also excited to share that Auger, a rapidly growing supply chain platform designed to bring intelligence and automation to global operations, has built its platform on Fabric, with all data stored natively in OneLake. This architecture enables Auger customers to seamlessly access their operations data through OneLake shortcuts within their own Fabric environments and use the full power of the platform including Power BI, Fabric data agents, and more. Learn more in my blog, co-authored with Auger Chief Executive Officer Dave Clark.

Protect your data with OneLake security, now generally available
Security and governance remain foundational to OneLake. I’m thrilled to announce OneLake security will be generally available in the coming weeks, enabling data owners to define roles, enforce row- and column-level controls, and manage permissions through a single unified model that follows the data.

To learn more about these announcements, read the OneLake blog and the Fabric Data Factory blog.

Processing and harmonizing data with Fabric analytics
AI agents are only as reliable as the data you feed them. Before data can train or ground an agent, it must be integrated, cleaned, and structured, so the agent operates from consistent, trusted information. With industry-leading engines in Fabric like Spark, T-SQL, KQL, and Analysis Services, we can equip data teams to do exactly that.

Now, we are expanding these capabilities with the introduction of Runtime 2.0 in preview, purpose-built for large-scale data computation. It incorporates Apache Spark 4.x, Delta Lake 4.x, Scala 2.13, and Azure Linux Mariner 3.0 to power advanced enterprise workloads. Materialized lake views are also now generally available, simplifying medallion architecture implementation in Spark SQL and PySpark and enabling always up-to-date pipelines with no manual orchestration. In addition, a new agentic Copilot experience in notebooks delivers deeper context awareness, reasoning over your workspace and generating code with greater speed and precision.
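As a rough sketch of how a materialized lake view simplifies one medallion-architecture step (the schema, table, and column names here are hypothetical), a silver-layer definition in Fabric Spark SQL might look like the following; Fabric keeps the view up to date without manual orchestration:

```sql
-- Hypothetical silver-layer materialized lake view in a Fabric lakehouse.
-- The platform tracks dependencies and refreshes the view automatically.
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.clean_orders
AS
SELECT
    order_id,
    customer_id,
    CAST(order_ts AS DATE) AS order_date,
    amount
FROM bronze.raw_orders
WHERE amount IS NOT NULL;   -- drop malformed rows on the way from bronze to silver
```

Chained views like this one form the declarative pipeline: a gold-layer view can select from `silver.clean_orders` in the same way, and Fabric refreshes the layers in dependency order.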

For real-time scenarios, we’re launching Microsoft Fabric Maps into general availability. Maps add geospatial context to your agents and operations by turning large volumes of location-based data into interactive, real-time visual insights.

For a comprehensive overview of these announcements and much more, read the Fabric Analytics announcement blog and the Fabric Real-Time Intelligence announcement blog.

Creating semantic meaning with Fabric IQ
Preparing raw data for AI is essential. The next step is transforming that data into meaningful, unified business context. That is where Fabric IQ comes in.

Fabric IQ unifies analytical data and operational data, including telemetry, time series, graph, and geospatial data, within a shared semantic framework of business entities, relationships, properties, rules, and actions. Instead of thinking in terms of tables and schemas, your teams and agents can operate on this framework, or ontology, aligned to how the business actually runs.

Fabric IQ ontologies will soon become accessible through an MCP server in preview, enabling agents to discover, understand, and act on this semantic layer. Ontologies can also serve as context sources for maps and soon in operations agents in Fabric, extending shared business context directly into operational decision-making and execution.

We are also excited to announce planning in Fabric IQ, a new enterprise planning capability that enables organizations to create plans, budgets, forecasts, and scenario models directly on top of Fabric’s semantic models. By complementing Fabric IQ’s ontologies with integrated planning, you get a complete, contextual view of your historical, real-time, and forward planning data. This allows users and agents to quickly answer what has happened, what is happening, and what should happen, all from a single source.

Finally, we recently announced a strategic partnership with NVIDIA to power the next generation of Physical AI by integrating Real-Time Intelligence and Fabric IQ with NVIDIA Omniverse libraries. The combined platform unifies real‑time operational data, business semantics, and physical simulation to enable organizations to optimize their physical operations in scenarios like intelligent digital twins, predictive maintenance, autonomous logistics, and energy optimization.

To learn more about all of our partner announcements, read the Fabric ISV announcement blog and the planning in Fabric IQ blog.

Enhancing the underlying Fabric IQ technology
Powering much of Fabric IQ’s rich experience is a combination of Power BI’s industry-leading, rich semantic model technology and graph in Fabric, our highly scalable graph database. Already delivering insights to more than 35 million active users, semantic models provide the ideal foundation for training agents through Fabric IQ. Now, with the general availability of Direct Lake on OneLake, your tables can be read directly from OneLake with native security enforcement, richer cross-item modeling, and import-class performance without data movement or refresh.

I’m also excited to share that graph in Fabric will be generally available in the coming weeks, enabling teams to visualize and query complex relationships across customers, partners, and supply chains.

To learn more, check out the Fabric IQ announcement blog and the Power BI announcement blog.

Empowering agents to act with Fabric data and operations agents
Frontier organizations are moving beyond general-purpose assistants and instead adopting multi-agent systems composed of specialized agents. These agents are each grounded on specific data and reusable across different systems, allowing you to deliver more accurate, accelerated, and scalable outcomes.

To support your multi-agent systems, Fabric comes with built-in agent creation capabilities with Fabric data agents and operations agents. I’m excited to share that Fabric data agents are now generally available. Fabric data agents can be thought of as virtual analysts, aligned to specific domain data to support deeper analysis and deliver insights. Operations agents complement them by monitoring real-time data, detecting patterns, and taking proactive action.


These agents can be used across Fabric or as foundational knowledge sources in leading AI tools like Microsoft Foundry, Copilot Studio or even Microsoft 365 Copilot. To learn more about our AI announcements, check out the Fabric analytics blog covering data agents and the Fabric IQ blog covering operations agents.

Building mission-critical applications with developer experiences in Fabric
Developers building the next generation of AI applications need a comprehensive, cost-effective data platform that’s already integrated with your existing tools and workflows. Today, we are expanding Fabric’s developer tooling to meet that demand.

First, Fabric Model Context Protocol (MCP) is advancing with two major milestones. Fabric local MCP is now generally available, providing an open-source local server that connects AI coding assistants such as GitHub Copilot directly to Fabric. Alongside this, we’re introducing the public preview of Fabric remote MCP, a secure, cloud‑hosted execution engine that enables AI agents and automation tools to perform authenticated actions in Fabric.

We’re also enhancing our Git integration with selective branching, allowing developers to branch out for a specific feature and pull only the items they need. You also get improved change comparisons to more easily review recent updates, and new folder relationships which show how feature workspaces connect to source workspaces.

We’re also launching two open-source projects to help teams move faster with Fabric: Agent Skills for Fabric and Fabric Jumpstart. Agent Skills for Fabric is an open-source set of purpose-built plugins that let you use natural language in the GitHub Copilot terminal to harness the full power of Microsoft Fabric. Additionally, Fabric Jumpstart is designed to help you get off the ground with detailed guidance, reference architectures, and single‑click deployments for sample datasets, notebooks, pipelines, and reports.

Finally, we are announcing that the Fabric Extensibility Toolkit (FET), an evolution of the Workload Development Kit (WDK), is now generally available. Along with this release, we are enabling support for full CI/CD, variable library, and a new management experience in the Admin portal.

Read the Fabric Platform announcement blog
Migrating your existing Azure service to Fabric
As Fabric continues to grow in functionality, we are also simplifying the migration from other Azure services. In addition to our existing Synapse tooling, we are bringing new migration assistants for Azure Data Factory, Azure Synapse Analytics, and Azure SQL in public preview.

The new Fabric migration assistant for Azure Data Factory and Synapse Analytics helps move your existing pipelines and artifacts like Spark pools and notebooks into Fabric with minimal disruption. It’s designed to support incremental modernization, allowing teams to evaluate, convert, and optimize pipelines as they transition to Fabric. The migration assistant for SQL databases helps move SQL Server into Fabric by importing schemas through DACPACs, identifying and resolving compatibility issues with AI assistance, and guiding teams through assessment and data copy workflows for a smoother cutover.

See more Fabric innovation
In addition to the announcements above, we are also rolling out a broad set of Fabric innovations across the platform. For a deeper look at the updates and what’s new this month, visit the Fabric March 2026 Feature summary blog, the Power BI March 2026 feature summary blog, and the latest posts on the Fabric Updates channel.

Explore additional resources for Microsoft Fabric
Sign up for the Fabric free trial.
View the updated Fabric Roadmap.
Try the Microsoft Fabric SKU Estimator.
Visit the Fabric website.
Join the Fabric community.
Read other in-depth, technical blogs on the Microsoft Fabric Updates Blog.
Read additional blogs by industry-leading partners
Sonata Software: Building an AI-ready data platform with data agents, ontology, and governance in Microsoft Fabric
Quadrant Technologies LLC: Real-Time Operational Intelligence in Microsoft Fabric: Deep Dive into RTI Capabilities, Anomaly Detection and Activator Alerting
Inspark: Why switch from Azure Synapse to Microsoft Fabric?
Esri: Unlock the power of location intelligence with ArcGIS for Microsoft Fabric
Dream IT Consulting Services: 8 Real-World Use Cases of Data Agents in Microsoft Fabric
UB Technology Innovations Inc.: From Data Platform to Decision Platform: How Microsoft Fabric and Copilot are Redefining Enterprise Analytics
Simpson Associates: Fabric Data Warehouse: Bringing Structure to Modern Data Strategies
Synapx Ltd.: Migrating Power BI to Microsoft Fabric Lakehouse with Medallion Architecture: A Strategic Imperative for Modern Construction Enterprises
Cloud Services: Real-Time Intelligence in Action: How Microsoft Fabric Helped Delfi Transform Its Newsroom
Cloud Services: Microsoft Fabric Data Agents: A New Reality
iLink Digital: Detect to Act in Seconds: How Real-Time Intelligence Is Rewriting the Rules of Emissions Management
Valorem Reply: How Nonprofits Are Rethinking Data with Microsoft Fabric

Advancing agentic AI with Microsoft databases across a unified data estate
http://approjects.co.za/?big=en-us/sql-server/blog/2026/03/18/advancing-agentic-ai-with-microsoft-databases-across-a-unified-data-estate/
Wed, 18 Mar 2026 12:45:00 +0000

Built on a consistent Microsoft SQL foundation from on premises to the cloud, Azure SQL brings AI capabilities directly into your database experience.

The post Advancing agentic AI with Microsoft databases across a unified data estate appeared first on Microsoft SQL Server Blog.

This week, we are excited to kick off SQLCon 2026 alongside FabCon in Atlanta. Bringing these SQL and Fabric communities together creates a unique opportunity to learn, connect, and share what’s next across the Microsoft databases portfolio.

This year is especially meaningful, as it marks the return of a Microsoft‑led SQL community event, while also showcasing how SQL continues to evolve as a critical part of Fabric. It is not just about new technology, but about reconnecting with each other and building the future of SQL together.

It’s inspiring to see the Microsoft SQL community continue to grow and engage, with user groups worldwide keeping conversations active across the SQL portfolio and a lot of customers using Microsoft SQL to innovate every day. With a comprehensive portfolio built on strategic common foundations and available across edge, PaaS, and SaaS, Microsoft databases form a unified platform for modern enterprise needs, whether you are migrating and modernizing, building cloud-native AI applications, or unifying your data.

Migrate and modernize with Azure SQL

Many of our customers are not modernizing in one big leap. You are evolving from SQL Server to hybrid and then to cloud services, and you want that journey to feel familiar, predictable, and low risk. That is exactly what Azure SQL is designed to deliver. Built on a consistent Microsoft SQL foundation from on premises to the cloud, Azure SQL brings AI capabilities directly into your database experience, along with enterprise‑grade security, high availability, and the flexibility to scale as your needs grow. Azure SQL is fully SQL compatible, delivers strong performance and low latency, and supports hybrid scenarios through Azure Arc.

AI agents are becoming an important accelerator for database migration and modernization at Microsoft, helping our customers reduce manual effort and move faster with more guided experiences across the journey. The general availability of GitHub Copilot in SSMS 22 is a great example of that investment in action: you can use the same GitHub Copilot experience you already use in Visual Studio and VS Code, now inside SQL Server Management Studio (SSMS), with chat and code assistance that helps you write, edit, and refactor T‑SQL more quickly and confidently. Whether you are a developer or database administrator (DBA), new to SQL or highly experienced, GitHub Copilot can support common workflows like improving queries and assisting with troubleshooting and administration tasks right where you work, and we are continuing to expand what it can do.

Today we are announcing savings plan for databases, a flexible, spend-based pricing option that helps you save up to 35%1 vs. pay-as-you-go prices on a one-year commitment. Savings plan for databases is designed for modern, evolving database environments: Customers commit to a fixed hourly spend for one year and receive lower prices across eligible Azure database services. Savings are automatically applied to the highest-value usage each hour, helping reduce costs while supporting migration, modernization, and architectural change.

Build cloud-native AI apps at scale

Once you move to the cloud, the questions shift. How do you build faster, scale smarter, and unlock more value from your data without re‑architecting everything you have already built? That is where Azure SQL Database Hyperscale comes in.

With Azure SQL Database Hyperscale, customers gain better price-performance, elastic scale and resilience for any workload, without the cost or disruption of rewriting T‑SQL or reworking operational models. Its unique architecture, built on shared storage and multiple replicas, allows you to scale reads independently from writes. With built‑in HTAP isolation, applications can handle massive transactional and analytical workloads without complex redesign. New capabilities now in public preview extend that foundation even further, including the SQL MCP Server for securely connecting SQL data to AI agents and Copilots, as well as larger 160 and 192 vCore options for high‑throughput workloads.

We’re delivering faster, more capable vector indexes to power AI applications. Recent enhancements improve vector search performance and efficiency with no code changes required. With full insert, update, and delete support, vector indexes stay current in real time, enabling dynamic applications. Features like quantization, iterative filtering, and tighter query optimizer integration provide faster, more predictable results, helping teams build responsive AI experiences directly on their SQL data.
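To make the vector capabilities concrete, here is a minimal, hedged T-SQL sketch of the pattern they accelerate; the table, column, and data below are hypothetical, and a small three-dimensional vector stands in for a real embedding. A vector index (in preview) can speed up this kind of query without changing its shape:

```sql
-- Hypothetical table with an embedding column using the native vector type.
-- Real embeddings would have hundreds or thousands of dimensions.
CREATE TABLE dbo.Articles
(
    ArticleId INT PRIMARY KEY,
    Title     NVARCHAR(200),
    Embedding VECTOR(3)
);

-- Nearest-neighbor search with the built-in distance function.
-- With a vector index in place, the optimizer can use it transparently.
DECLARE @q VECTOR(3) = '[0.1, 0.9, 0.2]';
SELECT TOP (5)
    ArticleId,
    Title,
    VECTOR_DISTANCE('cosine', Embedding, @q) AS Distance
FROM dbo.Articles
ORDER BY Distance;
```

Because rows in the indexed table can be inserted, updated, and deleted as usual, the index stays current in real time, which is what enables the dynamic applications described above.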

Temenos built its next‑generation banking platform, Temenos Core, on Azure using Azure SQL Database Hyperscale to achieve global scale, high availability, and resilient performance. The platform processes billions of transactions daily and more than 17,500 transactions per second at peak. By building on Hyperscale, Temenos reduced onboarding time, accelerated innovation, and shifted banks from worrying about downtime to competing on availability and digital innovation.

Unify your data estate with SQL database in Fabric

We continue to raise the bar on enterprise readiness for SQL database in Fabric by bringing enterprise-grade security and compliance capabilities directly into the platform. Today at SQLCon, we announced the general availability of features including SQL Auditing, Customer‑Managed Keys, and Dynamic Data Masking, and the preview of workspace‑level Private Link. We brought these enhancements to help customers meet strict governance and regulatory requirements without adding operational complexity. The result is confidence that your SQL workloads in Fabric are secure, compliant, and ready for production.

SQL database in Fabric is becoming even more powerful for AI‑driven applications. The same vector indexing enhancements available in Azure SQL Database Hyperscale are now built into SQL database in Fabric as well. Because both are powered by the same Microsoft SQL engine, customers benefit from consistent performance, capabilities, and innovation across the SQL portfolio—making it easier to build intelligent applications wherever their data lives.

Finally, moving to SQL database in Fabric is simpler than ever. The Migration Assistant now supports SQL database in Fabric as a target destination. It provides a Copilot-assisted experience that helps SQL developers assess readiness, migrate schema, identify compatibility issues, and copy data with less manual effort. By preserving familiar SQL skills and workflows, customers can modernize at their own pace while accelerating time to value on Fabric’s unified analytics and AI platform.

There is one more Fabric innovation that matters deeply for how we deliver Microsoft databases as a unified platform. As applications grow more sophisticated, most organizations now rely on a mix of SQL and NoSQL databases across cloud, on‑premises, and edge environments. Provisioning, monitoring, and maintaining health across a growing database fleet often requires multiple tools and portals, making it harder to see what’s happening and manage at scale.

To address this, we are introducing the Database Hub in Microsoft Fabric, now available in early access. The Database Hub provides a unified database management experience that brings together databases across edge, cloud, and Fabric into one coherent view. From a single place, database teams can explore, observe, govern, and optimize their entire estate, including Azure SQL, Azure Cosmos DB, Azure Database for PostgreSQL, SQL Server enabled by Azure Arc, and Azure Database for MySQL, without changing how each service is deployed or operated.

Built for scale, the Database Hub introduces an agent-assisted, human-in-the-loop approach to database management. Intelligent agents continuously reason over estate-wide signals to surface what changed, explain why it matters, and guide teams toward what to do next, while built-in observability, delegated governance, and Copilot-powered insights help teams move from insight to action with greater confidence. With the Database Hub, teams spend less time navigating tools and more time enabling what comes next: unlocking deeper integration across applications, analytics, and AI from a single control plane for the Microsoft databases portfolio.

Database Hub is available today in early access. Sign up today and see how the Database Hub can bring clarity and control to your database estate.

Moving forward with the SQL community

SQLCon is about bringing the SQL community together. It is about rebuilding connections and shared learning. It also reflects our long-term commitment to SQL: a comprehensive portfolio, built on strategic common foundations and available across edge, PaaS, and SaaS, that serves modern enterprise needs whether you are migrating and modernizing, building cloud-native AI applications, or unifying your data. We are investing in SQL for the future, alongside the community that continues to shape it.

Finally, SQLCon is coming to Europe! Join the global data and SQL community from September 28 to October 1, 2026, in Barcelona, Spain, for hands-on learning, expert insights, and real-world stories. Register to be a part of it. I can’t wait to see you there.

Additional SQL resources


1Customers may see savings estimated to be between 0% and 35%. The 35% savings estimate is based on one Azure SQL Database serverless running for 12 months at a pay-as-you-go rate vs. a reduced rate for a 1-year savings plan. Based on Azure pricing as of March 2026. Prices are subject to change. Actual savings may vary based on location, database service, and/or usage. 

The year ahead for SQL Server: Ground to cloud to fabric
http://approjects.co.za/?big=en-us/sql-server/blog/2025/01/15/the-year-ahead-for-sql-server-ground-to-cloud-to-fabric/
Wed, 15 Jan 2025 16:00:00 +0000

The “state of the union” in 2025 of Microsoft’s new releases and capabilities for SQL Server, Azure SQL, SQL database in Fabric, Copilots, and more.

The post The year ahead for SQL Server: Ground to cloud to fabric appeared first on Microsoft SQL Server Blog.

As we begin a new year in 2025, many of you are looking at new projects and new applications, trying to determine how to integrate AI into your business, modernizing your data estate, or considering an upgrade or a cloud migration. As you consider your options, let’s look at the state of the union in 2025 of Microsoft’s new releases and capabilities for SQL Server, Azure SQL, SQL database in Fabric, Copilots, tools, and developer experiences.


SQL Server 2025

In November 2024, we announced the next major release of SQL Server: SQL Server 2025.


SQL Server 2025, now in private preview, includes capabilities to build AI applications, including vector and AI model management, on-premises or in the cloud. We continue to invest in security, performance, and availability. Another exciting area of investment in SQL Server 2025 is developer features such as a JSON type, RegEx support, Change Event Streaming, and REST API support. Sign up to work with us on the next release. I look forward to shipping a public preview and the general availability of this exciting major release in 2025.
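To give a flavor of two of those developer features together, here is a short, hedged sketch (the table and data are hypothetical, and preview syntax may evolve) combining the native JSON type with the new regular-expression functions:

```sql
-- Hypothetical table using the SQL Server 2025 native json type,
-- which stores documents in an optimized binary format.
CREATE TABLE dbo.Events
(
    EventId INT IDENTITY PRIMARY KEY,
    Payload JSON
);

INSERT INTO dbo.Events (Payload)
VALUES ('{"user": "alice", "email": "alice@contoso.com"}');

-- REGEXP_LIKE filters rows whose email field matches a pattern,
-- combined with JSON_VALUE to extract fields from the document.
SELECT EventId, JSON_VALUE(Payload, '$.user') AS UserName
FROM dbo.Events
WHERE REGEXP_LIKE(JSON_VALUE(Payload, '$.email'), '^[^@]+@contoso\.com$');
```

The same query shape works whether the table lives on-premises or in Azure SQL, which is the point of the consistent engine across the portfolio.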

Here are a few resources where you can learn more about SQL Server 2025:

SQL Server enabled by Azure Arc

Azure Arc could be one of the most underused capabilities associated with SQL Server. The concept is simple: instead of running SQL Server in Azure (that would be SQL Server on an Azure Virtual Machine), you connect your existing SQL Server to Azure, whether it is running on-premises or in another public cloud. Imagine using the Azure portal to answer questions like "What dbcompat levels are used across all my SQL Server instances?" Azure Arc has many other capabilities to help you manage your SQL Server instances, but a few worth a look are Microsoft Entra authentication, Azure Migration, pay-as-you-go (PAYG) licensing, and Extended Security Update (ESU) support. Learn how to get started with Azure Arc.
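Azure Arc surfaces that kind of inventory fleet-wide in the portal; on any single instance, the underlying information comes from the standard catalog view sys.databases, so the portal query is essentially aggregating this:

```sql
-- Compatibility level of every database on one instance;
-- Azure Arc aggregates this information across your whole estate.
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level, name;
```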

Azure SQL

It is incredible to think that Azure SQL Database was launched almost 15 years ago as SQL Azure. Today Azure SQL is a brand that offers you the ability to run SQL Server in a Virtual Machine, a managed SQL instance, or a contained database. Each of these deployment options has continued innovations to accelerate development, deployment, and performance. SQL Server in Azure Virtual Machine continues to be a great option to lift and shift SQL Server, keep up to date with it here, but let’s look further at other Azure SQL options.

Azure SQL Managed Instance

The biggest new capability is the Next-generation General Purpose service tier. This new deployment option offers a higher level of resources, better price/performance, more granular control of input/output (I/O) performance, and up to 500 databases per instance. I look forward to seeing it become generally available. Keep up to date with all the latest announcements.

Azure SQL Database

We announced so many great new capabilities throughout 2024 including but not limited to:

  • Hyperscale Serverless and Elastic Pools.
  • Hyperscale performance and availability enhancements.
  • New developer features like a JSON data type (which is also available in all flavors of SQL).

It might be time for you to rethink Hyperscale. With its new pricing model, Serverless and replica capabilities, this can be a great option to start a new database deployment and have it autoscale per your needs. And do not forget to try out Azure SQL Database for free (not a trial). Keep up to date with all the latest announcements.
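The JSON data type mentioned above can be sketched in T-SQL as follows. This is a minimal illustration; the table and column names are hypothetical:

```sql
-- Store and query documents with the native json type
CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY PRIMARY KEY,
    OrderInfo json NOT NULL
);

INSERT INTO dbo.Orders (OrderInfo)
VALUES ('{"customer": "Contoso", "total": 49.99}');

-- Extract a scalar value from the stored document
SELECT JSON_VALUE(OrderInfo, '$.customer') AS Customer
FROM dbo.Orders;
```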

SQL database in Fabric

Microsoft Fabric is a unified data platform. Until now, most of the capabilities in Fabric centered on analytics. Now there is an operational database built into Fabric, and it uses SQL Server!


SQL database in Fabric brings the power of Azure SQL Database deeply integrated into the Fabric ecosystem. Using the same database engine as SQL Server and Azure SQL, SQL database in Fabric is both familiar and innovative. Deploy a database in seconds, build a new AI application easily within the Fabric platform with CI/CD and GraphQL built-in. And all are integrated within the Fabric user experience and platform.

There is much more coming in this calendar year for SQL database in Fabric. Give it a spin today with a free Fabric trial capacity.

Tools and Copilots

We made big investments in 2024 in our tools and will continue to do more in this calendar year, but the most significant announcements were the revival of SQL Server Management Studio (SSMS) and new AI-assisted experiences.

SQL Server Management Studio

We accelerated future investment in SSMS with enhancements to the latest release, SSMS 20. Proving that SSMS is back, we also announced a significant new preview release, SSMS 21, which includes:

  • A new shell based on the latest Visual Studio.
  • New installer and update experience.
  • Dark theme.
  • 64-bit support.
  • Git support.

There is more to come in 2025 as we iterate on the current preview. Try out the new SSMS. In addition, we have a preview for a Copilot in SSMS.

AI-assistance in Azure SQL and SQL database in Fabric

We introduced an AI-assisted experience in the framework of Microsoft Copilot in Azure. Using your database context in the Azure Portal, you can type in prompts like, “my database is slow” and get fast and guided advice on performance troubleshooting scenarios. Try this out yourself. SQL Database in Fabric offers AI-assisted capabilities in the Query Editor and as a sidecar chat experience.

We believe AI-assisted capabilities can help both developers and administrators for SQL ground to cloud to fabric so we will continue to invest and innovate everywhere SQL exists in the future.

AI applications

The future of data-driven applications is to use AI. We believe the future is now, so we want to invest in capabilities inside the database engine to power your new AI applications, whether you are building retrieval-augmented generation (RAG) applications, chat-based applications, or AI agents. We also have a great solution outside of the SQL engine using Azure AI Search with SQL.

We believe SQL makes a compelling solution because you can build operational RAG applications with the security and scalability of the database engine and the familiarity of the T-SQL language. This includes access to AI models in Azure OpenAI, a new vector type, vector functions, and, coming soon, vector search using vector indexes built on the popular Microsoft vector indexing technology, DiskANN. SQL Server 2025 will include access to AI models on-premises or in the cloud. We also have solutions well integrated with frameworks like LangChain and Semantic Kernel.
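A sketch of what the vector type and vector functions look like in T-SQL, based on the announced preview surface. The table, the tiny 3-dimensional vectors, and the literal values are illustrative only; real embedding models produce much larger vectors (for example, 1536 dimensions):

```sql
-- Illustrative table holding precomputed embeddings
CREATE TABLE dbo.Documents
(
    DocID int IDENTITY PRIMARY KEY,
    Content nvarchar(max),
    Embedding vector(3)   -- toy dimension for readability
);

INSERT INTO dbo.Documents (Content, Embedding)
VALUES (N'Intro to Azure SQL', CAST('[0.1, 0.9, 0.0]' AS vector(3)));

-- Rank documents by cosine distance to a query embedding
DECLARE @q vector(3) = CAST('[0.1, 0.8, 0.1]' AS vector(3));
SELECT TOP (5) DocID, Content,
       VECTOR_DISTANCE('cosine', Embedding, @q) AS Distance
FROM dbo.Documents
ORDER BY Distance;
```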

Check out our demo from Microsoft Ignite showing AI applications for SQL everywhere it exists. Keep up with the latest AI application capabilities at intelligent applications, SQL AI samples, and this SQL AI workshop.

Fabric Mirroring

We have seen the rising popularity of Microsoft Fabric as a unified data platform. We want to be sure you can easily integrate your SQL data, wherever it exists, into Fabric. Therefore, we introduced the concept of Fabric Mirroring of Azure SQL Database. This provides a zero-ETL method to access your data separate from the operational database for near-real time analytics. This includes automatic changes fed into Fabric as you modify your source database and free Mirroring storage for replicas tiered to Fabric capacity. You can get started today for Azure SQL Database.

To ensure you can mirror any SQL database, we announced a public preview of mirroring for Azure SQL Managed Instance and a private preview for SQL Server. You can also sign up for the preview here.

Learn more at upcoming events

As you plan out the first few months of the year, consider these events where Microsoft and others from the community will teach you all of these new innovations.

VS Live Las Vegas 2025

This is one of the premier events focused on developers. Use the discount code WARD and register today.

Fabric Community Conference 2025

For the first time, the Microsoft Fabric Community Conference is putting SQL Server center stage. Join us at this incredible community event for a deep dive into SQL database in Fabric and a preview of SQL Server 2025. The SQL dream team will be there: Shireesh Thota, Erin Stellato, Joe Sack, Muazma Zahid, Davide Mauri, and I will be leading sessions, as will SQL community legends Denny Cherry, Grant Fritchey, Monica Rathbun, Anthony Nocentino, John Morehouse, Joey D'Antoni, and more! Register with code MSCUST to get $150 off. Workshops sell out weeks in advance, so save your spot now.

And we will be at more events in the upcoming calendar year. Here is to all our customers and community for a successful and momentous year in 2025 for SQL Server, from ground to cloud to fabric.

The post The year ahead for SQL Server: Ground to cloud to fabric appeared first on Microsoft SQL Server Blog.

]]>
Modernize your database with the consolidation and retirement of Azure Database Migration tools http://approjects.co.za/?big=en-us/sql-server/blog/2024/09/12/modernize-your-database-with-the-consolidation-and-retirement-of-azure-database-migration-tools/ Thu, 12 Sep 2024 15:00:00 +0000 By migrating their databases to Azure, customers like Ernst and Young are modernizing their data estate and leveraging cutting-edge cloud innovations.

The post Modernize your database with the consolidation and retirement of Azure Database Migration tools appeared first on Microsoft SQL Server Blog.

]]>
Simplifying Database Migrations with Azure SQL 

By migrating their databases to Azure, customers like Ernst and Young are modernizing their data estates and leveraging cutting-edge cloud innovations. However, the migration process can be complex, whether moving within the same database management system (homogeneous) or between different systems (heterogeneous). Microsoft offers a suite of tools to simplify migration. To further enhance the user experience, we are streamlining the Azure database migration tools ecosystem by retiring certain overlapping tools, making it easier to find the right tool and providing unified migration experiences across all phases of migration. As part of this effort, effective December 15, 2024, we are replacing some tools with unified experiences that offer capabilities across the various migration stages, helping customers modernize their data estates and take advantage of innovation in the cloud.


Azure Database Migration Guides

Step-by-step guidance for modernizing your data assets

With a refined set of tools, you can confidently plan, assess, and execute your database migration with minimal downtime, ensuring a smooth transition to Azure SQL. After the December 15, 2024, retirement date, Microsoft will stop supporting these tools for any issues that arise and will not issue bug fixes or further updates. Here is the list of tools planned for retirement, along with Microsoft's recommended replacements.

  • Database Migration Assessment for Oracle (DMAO), an Azure Data Studio extension that helps you assess an Oracle workload for migration to Azure SQL and Azure Database for PostgreSQL. Retirement date: 12/15/2024. Recommended replacement: for Azure SQL target assessments, switch to the assessment and Azure SQL target recommendation capabilities in SQL Server Migration Assistant (SSMA) for performing Oracle to Azure SQL assessments; for PostgreSQL target assessments, switch to the Ora2PG migration cost assessment capabilities to get Azure PostgreSQL target recommendations.
  • Database Schema Conversion Toolkit (DSCT), an Azure Data Studio extension designed to automate database schema conversion between different database platforms. Retirement date: 12/15/2024. Recommended replacement: the conversion assessment and Oracle schema conversion capabilities in SQL Server Migration Assistant (SSMA) for Oracle to Azure SQL conversions.
  • Database Experimentation Assistant (DEA), an experimentation solution for SQL Server upgrades that can help you evaluate a targeted version of SQL Server for a specific workload. Retirement date: 12/15/2024. Recommended replacement: open-source tools like SQLWorkload, a collection of tools to collect, analyze, and replay SQL Server workloads, on-premises and in the cloud.
  • Data Access Migration Toolkit (DAMT), a VS Code extension that helps users identify SQL code in application source code when migrating from one database to another and flag SQL compatibility issues. Supported source database backends include IBM DB2, Oracle Database, and SQL Server. Retirement date: 12/15/2024. Recommended replacement: to identify SQL queries in source code, use regular expressions or parse the application code, either manually or with custom-built tools, to find embedded T-SQL; to identify compatibility issues between your source SQL Server and the target Azure SQL, use the assessment capabilities in SQL Server enabled by Azure Arc, the Azure SQL Migration extension for Azure Data Studio, or the Azure Migrate SQL assessment capabilities.

With the retirement of Database Migration Assessment for Oracle (DMAO), the Database Schema Conversion Toolkit (DSCT), the Data Access Migration Toolkit (DAMT), and the Database Experimentation Assistant (DEA), the Azure database migration tooling ecosystem is greatly simplified. Here are Microsoft's recommended database migration tools for customers moving to Azure SQL.

Homogenous migrations (SQL Server to Azure SQL) 

If the SQL Server that will be migrated is already enabled by Azure Arc, you can use Arc capabilities to perform a migration assessment and get optimal Azure SQL Target recommendations. Additionally, SQL Server enabled by Azure Arc provides multiple Azure benefits to SQL Servers outside Azure like automated backups and patching, Microsoft Defender for SQL, inventory of instances and databases, and Entra ID support. By enabling these Arc features, you can leverage cloud automation and security for Azure SQL Server even before you migrate. 

If the SQL Server outside Azure is not yet inventoried, you can use Azure Migrate for discovery, assessment, and a business case to identify the right Azure SQL targets for your on-premises SQL workloads and to see the projected cost savings of migrating to Azure SQL.

To migrate SQL Server into an Azure Virtual Machine with the same configuration as the source, users can use Azure Migrate to perform lift and shift migrations. SQL Server on Azure Virtual Machines allows you to easily migrate your SQL Server workloads to the cloud, offering SQL Server’s performance and security along with Azure’s flexibility and hybrid connectivity to address urgent business needs. Later you can evaluate one of the Azure SQL PaaS targets (Azure SQL Managed Instance, Azure SQL Database) and modernize to a PaaS service for better cost and workload performance optimizations. 

If you have completed an assessment and are ready to move to Azure SQL Managed Instance or Azure SQL Database, you can start your migration with Azure Migrate, Azure Database Migration Service, or the Azure SQL Migration extension for Azure Data Studio.

If the SQL Server estate is already inventoried, users can use the Azure SQL Migration extension for Azure Data Studio to complete the entire migration journey, that is, perform an assessment, get Azure SQL target recommendations, and perform migrations.

Heterogenous migrations (non-SQL Server databases to Azure SQL) 

With the availability of target assessment and SKU recommendation capabilities in SQL Server Migration Assistant (SSMA), along with its existing code conversion and migration capabilities, SSMA becomes the single tool you need to migrate from other source database platforms, like Oracle, DB2, SAP ASE, MySQL, and Access, to Azure SQL or SQL Server.

Learn more about modernizing your databases with Azure

The post Modernize your database with the consolidation and retirement of Azure Database Migration tools appeared first on Microsoft SQL Server Blog.

]]>
Modernize Microsoft SQL Server 2014 workloads with Azure http://approjects.co.za/?big=en-us/sql-server/blog/2024/08/14/modernize-microsoft-sql-server-2014-workloads-with-azure/ Wed, 14 Aug 2024 16:00:00 +0000 As of July 9, 2024, SQL Server 2014 has reached its end of support.

The post Modernize Microsoft SQL Server 2014 workloads with Azure appeared first on Microsoft SQL Server Blog.

]]>
We take pride in delivering innovation with each new version of Microsoft SQL Server. However, there comes a time when product lifecycles must conclude. As of July 9, 2024, SQL Server 2014 has reached its end of support. Many of our customers, including Scandinavian Airlines, have begun transitioning their SQL workloads to Microsoft Azure or are updating to SQL Server 2022. Their objective is straightforward: to modernize their databases and applications while accelerating innovation through using cloud technologies. 

“With our migration to PaaS, we got what we wanted: greater scalability, reliability, security, agility in managing our IT infrastructure—and greater peace of mind—all without the cost and hassle of doing this ourselves,” 

Daniel Engberg, Head of AI, Data, and Platforms at Scandinavian Airlines System  

Migrate to Microsoft Azure

Boost productivity and enable innovation.

This blog post will guide you through several best practices our customers employed when faced with the SQL Server end-of-support moment. Customers have three choices for handling their out-of-support SQL Server workloads: moving or updating to Azure, upgrading to SQL Server 2022, or getting Extended Security Updates (ESUs) for additional preparation time. 

Migrate and modernize to Azure, a smooth path, a more powerful destination 

Migrating to a cloud platform is an essential step on the journey to modernization, and there are many choices. What makes SQL Server and Microsoft Azure SQL unique is that it’s built on the same engine, no matter where you deploy, which means you can build on your existing SQL experience while gaining the layered security, intelligent threat detection, and data encryption that Azure provides. 

Modernizing to Microsoft Azure SQL Managed Instance offers cost savings, scalability, security, seamless migration, productivity, and always up-to-date features. Recent product highlights include Azure SQL Managed Instance Next-gen General Purpose, now in public preview, which supports twice as many Azure VM configurations, making migration and modernization faster and easier than ever for a larger number of customer scenarios. Through the Azure SQL Managed Instance free tier, now in public preview, customers can experience the full capabilities of managed SQL Server in the cloud at no cost for the first 12 months, with access to a General Purpose instance capable of accommodating up to 100 databases, along with 720 vCore hours of compute per month (non-accumulating) and 64 GB of storage.

Modernizing your SQL Server workloads to Azure also presents a chance to utilize cutting-edge innovation like Microsoft Copilot. Microsoft Copilot in Azure has extended its capabilities to Microsoft Azure SQL Database with new skills designed to enhance the management and operation of SQL-based applications.  

Extending end-of-support time

If you are ready to move to the cloud but feel challenged to upgrade or modernize before the end-of-support timeline, Extended Security Updates are available for free in Azure for SQL Server 2014 and 2012 and Windows Server 2012. Secure your workloads for up to three more years after the end-of-support deadline by migrating applications and SQL Server databases to Microsoft Azure Virtual Machines. Free Extended Security Updates are available for Azure Virtual Machines, including Microsoft Azure Dedicated Host, Microsoft Azure VMware Solution, Nutanix Cloud Clusters on Azure, and Microsoft Azure Stack (Microsoft Azure Stack Hub, Microsoft Azure Stack Edge, and Microsoft Azure Stack HCI). Combining Extended Security Updates in Azure with Azure Hybrid Benefit further reduces your costs. With these pricing advantages, AWS is up to five times more expensive than Azure for SQL Server and Windows Server end-of-support workloads.

In-place upgrade to SQL Server 2022 

Another way to stay protected is to upgrade your SQL Server to SQL Server 2022, the most Azure-enabled release yet. Get more out of your data with enhanced security, industry-leading performance and availability, and business continuity through Azure. 

SQL Server 2022 is the most Azure-enabled release of SQL Server, with continued innovation across performance, security, and availability. Gain deeper insights, predictions, and governance from your data at scale. Take advantage of enhanced performance and scalability with built-in query intelligence. 

Stay protected on-premises or in multi-cloud environments with Azure Arc 

Just as with SQL Server 2012, Extended Security Updates for SQL Server 2014 offers an enhanced cloud experience through Microsoft Azure Arc. First year coverage from Extended Security Updates started on July 10, 2024. With this more customer-centric approach, security updates will be natively available in the Microsoft Azure portal through Azure Arc. This also provides Azure benefits and flexible subscription billing for SQL Server 2014 workloads on-premises or in multi-cloud environments. 

We’re continuing to enhance the capabilities Azure Arc offers to Extended Security Updates. Just recently, physical-core licensing with unlimited virtualization was released for SQL Server 2012 and 2014 ESUs. For customers who need to maximize database performance or require security isolation and better resource management, physical core licensing provides a more cost-effective way to leverage Extended Security Updates via Azure Arc. 

Also, if you enabled an ESU subscription in your production environment managed through Azure Arc, you can enable a SQL Server ESU subscription in your non-production environment for free, through SQL Server Developer Edition or an Azure dev/test subscription.

We encourage all our customers running SQL Server 2014, Windows Server 2012, and Windows Server 2012 R2 to start planning for the end of support. We have migration resources, best practices, and more, as well as a rich ecosystem of partners ready to help. To get started, please visit the following pages to learn more. 

Learn More 

The post Modernize Microsoft SQL Server 2014 workloads with Azure appeared first on Microsoft SQL Server Blog.

]]>
Why migrate Windows Server and SQL Server to Azure: ROI, innovation, and free offers http://approjects.co.za/?big=en-us/sql-server/blog/2024/04/25/why-migrate-windows-server-and-sql-server-to-azure-roi-innovation-and-free-offers/ Thu, 25 Apr 2024 15:00:00 +0000 Learn more on how we're connecting with customers talking about the value of migration.

The post Why migrate Windows Server and SQL Server to Azure: ROI, innovation, and free offers appeared first on Microsoft SQL Server Blog.

]]>
Hey everyone!  

We’ve been on the road the last couple of weeks at the MVP Summit, SQLBits, and FabCon, connecting with customers and talking about the value of migration and modernization. Specifically, we want to dig into how Azure can deliver real business value through cost optimization and streamlined productivity when customers migrate their Windows Server and SQL Server deployments to Azure.

We’ve helped countless organizations migrate their SQL Server and Windows Server workloads to Azure, a critical first step in any transformation initiative. The move can help improve cybersecurity posture and business continuity, boost productivity, and lay the foundation for AI and other highly scalable data innovations, while automating updates, backups, and other time-consuming IT tasks.

Modernize and lower total cost of ownership (TCO) 

Migration is a business strategy that pays off. In The Business Value of Microsoft Azure SQL Database and Azure SQL Managed Instance Workload,1 organizations that migrated to Azure SQL Managed Instance and Microsoft Azure SQL Database can get up to 406 percent return on investment over 3 years and can expect a 30-percent reduction in TCO over 5 years, protecting an additional $6.85 million in annual revenue.

A separate study found that customers that migrated both Windows Server and SQL Server workloads to Azure generated more value. According to The Business Value of Microsoft Azure for SQL Server and Windows Server Workloads,2 by optimizing costs, operations, and business opportunities, companies gained $15.85 million in total annual benefits while also increasing IT security efficiency by 43 percent with cloud tools and automation.


Azure SQL

Migrate, modernize, and innovate with the modern SQL family of cloud database services.

A smooth path to migration, a more powerful destination

Migrating to a cloud platform is an essential step on the journey to modernization, and there are many choices. 

What makes SQL unique is that it’s built on the same engine no matter where you deploy, which means you can build on your existing SQL experience while gaining the layered security, intelligent threat detection, and data encryption that Azure provides. And as we shared with customers at SQLBits, there’s now an even more powerful option for customers looking to leverage the full PaaS experience. Azure SQL Managed Instance Next-gen General Purpose brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and help bring more mission-critical SQL workloads to Azure. With close to 100 percent feature compatibility with SQL Server, Azure SQL Managed Instance is the recommended choice to migrate and modernize SQL apps at scale and at your own pace.

Another option many of our customers start with is by running their Windows Server workloads on Azure Virtual Machines, benefiting from a simplified, managed experience and cloud-native support for SQL Server, .NET apps, and Remote Desktop Services. Or you can modernize your entire Windows Server estate, choosing from more than 200 Azure services and capabilities, including support for hybrid environments. 

Take the first step or the next: You have choices

When it comes to migration, Azure meets you where you are with options for moving on-premises workloads and for developing new cloud solutions. For example, many organizations start by moving Windows Server workloads to Azure Virtual Machines, enabling them to easily scale to support new developments and more efficiently manage peak loads. Hokkoku Bank took this step, migrating its Windows Server–based estate to Azure as part of a cloud-first initiative. Azure supports the bank’s modernization plans and helps provide a disaster recovery solution in an earthquake-prone region.  

Correios de Portugal, the country’s postal service, migrated its Windows Server workloads to Azure Virtual Machines backed by Azure SQL, which provides a smooth path to a cost-effective, highly scalable, fully managed PaaS database. It’s the best choice for modernizing your apps and getting the most out of your existing investments.

Many of our database customers move to SQL Server on Azure Virtual Machines for the cost benefits on top of the scalability and resilience of Azure. As an example, healthcare software manufacturer Allscripts migrated on-premises applications to Azure SQL Database Managed Instance where possible, but another 600 on-premises VMs needed a different migration approach. Allscripts moved them to SQL Server on Azure Virtual Machines, a quick, low-risk step for workloads it plans to optimize and modernize later. The lift-and-shift approach can be an easy first step in your cloud journey.

Azure also offers hybrid solutions that bridge your on-premises and cloud resources. For example, you can move or extend on-premises VMware environments using Azure VMware Solution. You can even use the free Windows Admin Center tool to manage across Windows Server environments, whether physical, virtual, on-premises, in Azure, or in a hosted environment, at no additional cost. To get started with a Windows Server migration, begin discovering and assessing on-premises resources using the free Azure Migrate tool.

Watch the Migrate to Innovate digital event on demand and learn the business benefits of migrating to Azure.

Try it for free 

If you want to know how your workload will perform before migrating, try these Azure offers and get started building that proof-of-concept.  

  • Try Azure SQL Managed Instance for free. For 12 months, you can get up to two instances per Azure subscription, 750 vCore hours of compute per month, and 32 GB data storage and 32 GB backup storage per month. 
  • Try Azure SQL Database for free. Test and develop applications or run small production workloads for free. This offer provides the first 100,000 vCore seconds, 32 GB of data, and 32 GB of backup storage per month at no charge for the lifetime of your subscription. 

Learn more about Azure SQL

Stay tuned for more migration announcements in the coming months. To get started now: 

  • Discover why cloud economics make sense and get greater return on your investment. 

  1. IDC report, The Business Value of Microsoft Azure SQL Database and Azure SQL Managed Instance Workloads, IDC #US51073123, August 2023. 
  2. The Business Value of Microsoft Azure for SQL Server and Windows Server Workloads

The post Why migrate Windows Server and SQL Server to Azure: ROI, innovation, and free offers appeared first on Microsoft SQL Server Blog.

]]>
Power what’s next with limitless relational databases from Azure http://approjects.co.za/?big=en-us/sql-server/blog/2023/11/15/power-whats-next-with-limitless-relational-databases-from-azure/ Wed, 15 Nov 2023 16:00:00 +0000 We were excited to get back in front of customers at Microsoft Ignite 2023 and PASS Data Community Summit.

The post Power what’s next with limitless relational databases from Azure appeared first on Microsoft SQL Server Blog.

]]>
At Microsoft, we’re seeing firsthand how data is powering incredible innovation and accelerating more than just a platform shift; it is changing the way we do everything. AI and generative AI are not futuristic abstract concepts; they are being deployed by millions every day, transforming every industry. Tapping into the full potential of that opportunity requires the right platform, powered by the right combination of powerful applications and limitless databases.

We are excited to get back in front of customers at Microsoft Ignite 2023 and PASS Data Community Summit to announce powerful product enhancements across Microsoft Azure databases designed to help customers take the next step or the first step in their transformation journey, with databases that are intelligent, trusted, and ready for developers to build, without limits.   

Limitless innovation for cloud native applications

If you are an application developer looking for a flexible relational cloud database solution with the performance and scalability to support your most demanding applications, you’ll want to check out Microsoft Azure SQL Database Hyperscale. Built on a unique architecture that separates storage and compute, these resources scale independently to meet the unique requirements of your apps. Plus, you can eliminate the need to pre-provision storage resources, as storage automatically scales to meet demand, with support for up to 100 TB. We are thrilled to introduce lower compute pricing on SQL Database Hyperscale, saving customers up to 35 percent on their compute bills. Effective December 15, 2023, customers will have competitive pricing on the resources they need to build scalable, secure, AI-ready applications.

We’re also excited to share that the Microsoft Azure SQL Managed Instance feature wave has reached general availability. This set of features improves Azure SQL Managed Instance’s performance, reliability, and security. The latest release delivers deeper integration with Microsoft SQL Server on-premises and the wider Azure service platform. And soon, customers will be able to start testing Azure SQL Managed Instance for free. Landing in December 2023, customers will be able to run proofs of concept, test applications, or simply learn more about the operational benefits of a fully managed database-as-a-service. This is in addition to the free Azure SQL Database offer that launched in October 2023.

Microsoft is also excited to share the newest updates for our fully managed community based open-source databases. These services help you manage your database and database infrastructure with automation, freeing you from the routine database management tasks so you can concentrate on what matters most.

Enhanced performance and scalability for Microsoft Azure Database for PostgreSQL

The latest enhancements for Azure Database for PostgreSQL deliver advanced storage and compute capabilities that enable optimal price-performance for enterprise production workloads, along with greater flexibility for managing performance and cost.

Azure Database for PostgreSQL extension for Azure AI

The PostgreSQL extension for Azure AI allows developers to use large language models (LLMs) and build rich PostgreSQL generative AI applications, meaning PostgreSQL queries on Azure can now power Azure AI applications. It enables calling into Microsoft Azure OpenAI Service to generate LLM-based vector embeddings for efficient similarity searches, which is particularly powerful for recommendation systems. It also enables calling into Azure AI Language for a wide range of scenarios, such as sentiment analysis, language detection, and entity recognition.
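As a brief sketch of how this looks in practice (the deployment name below is a placeholder, and the function signatures follow the azure_ai extension’s documentation at the time of writing, so treat this as illustrative rather than definitive):

```sql
-- Enable the extension (assumes azure_ai has been allow-listed on the server)
CREATE EXTENSION IF NOT EXISTS azure_ai;

-- Generate an embedding via Azure OpenAI;
-- 'my-embeddings-deployment' is a hypothetical deployment name
SELECT azure_openai.create_embeddings('my-embeddings-deployment',
                                      'A great read for database developers');

-- Run sentiment analysis via Azure AI Language
SELECT azure_cognitive.analyze_sentiment('This product is fantastic!', 'en');
```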

New performance enhancements in Microsoft Azure Database for MySQL Business Critical 

New performance enhancements in the Azure Database for MySQL Business Critical service tier make it ideal for high-performance transactional or analytical applications. In fact, a recent performance benchmark study by Principled Technologies shows that the Azure Database for MySQL Business Critical service tier is up to 50 percent faster than MySQL on Amazon Web Services Relational Database Service (RDS) and up to 2.26 times faster than Google Cloud Platform Cloud SQL for MySQL. These key innovations help make Azure Database for MySQL Business Critical the perfect option for mission-critical, Tier 1 MySQL workloads.

Extend Azure to your entire data estate

For all the innovation that customers are driving in the cloud, we recognize that much of their data remains on-premises. This is why Microsoft continues to invest heavily in ensuring that customers can get the most from their entire data estate with Microsoft Azure Arc. The latest monitoring capabilities for SQL Server enabled by Azure Arc are designed to deliver critical insights across your entire SQL Server environment, optimizing database performance and delivering fast diagnostic times.

Customers can also now improve SQL Server business continuity and consistency by viewing and managing Always On availability groups, failover cluster instances, and backups directly from the Azure portal. This capability provides better visibility and an easier, more flexible way to configure critical database operations.  

In addition, with Extended Security Updates as a service and automated patching, customers can always keep their apps secure, compliant, and up to date. Learn more about these latest features.

We look forward to the week ahead and connecting with you in person.

The post Power what’s next with limitless relational databases from Azure appeared first on Microsoft SQL Server Blog.

]]>
Register today—Free Azure Data training at Data Platform Summit 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/08/23/register-today-free-azure-data-training-at-data-platform-summit-2022/ Tue, 23 Aug 2022 15:00:00 +0000 Data Platform Virtual Summit 2022 (DPS 2022), a free global learning event for data professionals, is just a few weeks away running from September 19–23.

The post Register today—Free Azure Data training at Data Platform Summit 2022 appeared first on Microsoft SQL Server Blog.

]]>
Data Platform Virtual Summit 2022 (DPS 2022) is just a few weeks away, running from September 19–23. A free global learning event for data professionals, DPS 2022 features a keynote from Bob Ward (Principal Architect, Microsoft) and Buck Woody (Applied Data Scientist, Microsoft), as well as more than 200 breakout sessions delivered by Azure Data engineering, partner organizations, and community leaders. With content delivered almost around the clock in five editions, DPS 2022 empowers Azure Data professionals worldwide with the deep technical skills they need to move ahead in their careers and digitally transform their organizations.

This year, DPS 2022 features nine tracks focusing on Azure Data:

  • Architecture 
  • Azure Data Administration 
  • Azure Data Development 
  • Business Intelligence & Advanced Analytics 
  • Data Science (AI/ML)
  • Industry Solution 
  • SQL Server 2022 
  • Professional Development 
  • Student Track 

The virtual platform offers live Q and A sessions, a networking lounge, and a community zone. Additionally, attendees will receive lifetime on-demand access to session recordings.

DPS 2022 offers an incredible opportunity to learn directly from our engineering teams, who will share the latest advances and insights on the Azure Data platform. 

  • Bob Ward and Buck Woody from the Microsoft Data Platform team will deliver the keynote together. The keynote will highlight the latest innovations across the Microsoft Azure Data platform and share customer case studies. Bob and Buck will show you how developers across the spectrum can leverage the new tools, processes, and platforms in the Intelligent Data Platform to create a data culture in your organization. Bob will go deep on the newest release in the Intelligent Data Platform: SQL Server 2022.
  • Microsoft Azure Data engineering teams will deliver over 65 sessions at DPS 2022. Hear the latest from the people who develop the tools you use every day and engage in live discussions. 
  • Visit the virtual expo hall where you can connect with our team across SQL Server, Azure SQL, Microsoft Learn, Power BI, and more. 

Register now for a week of free training at the Data Platform Virtual Summit and receive free streaming access to all the session recordings.

The post Register today—Free Azure Data training at Data Platform Summit 2022 appeared first on Microsoft SQL Server Blog.

]]>
Improve scalability with system page latch concurrency enhancements in SQL Server 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/07/21/improve-scalability-with-system-page-latch-concurrency-enhancements-in-sql-server-2022/ Thu, 21 Jul 2022 15:00:00 +0000 Over the past several SQL Server releases, Microsoft has improved the concurrency and performance of the tempdb database.

The post Improve scalability with system page latch concurrency enhancements in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Part of the SQL Server 2022 blog series.

Over the past several SQL Server releases, Microsoft has improved the concurrency and performance of the tempdb database. In SQL Server 2022, we are addressing one of the last remaining areas of contention by introducing concurrent global allocation map (GAM) and shared global allocation map (SGAM) updates. This gives SQL Server 2022 a big improvement in scalability, as tempdb is arguably the most important database in your environment.

Tempdb performance challenges

Historically, tempdb has been one of the common pain points in SQL Server. Why? Usage is one of the key reasons. By usage, we are referring to creating temp tables and other user objects, but tempdb is also used internally to spill to disk when there isn’t enough memory available for a process, or when an inaccurate estimate causes SQL Server to spill to tempdb.

What is the tempdb database?

Tempdb is a special-purpose system database, but its structure is essentially the same as any other user database. As the name suggests, the tempdb database was designed for temporary storage, meaning that nothing written to tempdb is intended to be persisted.
What is important to know is that while SQL Server uses the tempdb database for nearly every workload, there is only one tempdb per SQL Server instance, and tempdb is recreated every time SQL Server is restarted.

Tempdb workloads

The main difference between tempdb and other databases is the workload. With tempdb, we are constantly creating and destroying objects such as temp tables. This is especially true in heavy OLTP environments, where you may have many threads doing all kinds of work; if you are already seeing resource contention on the system, the impact will be amplified.

So, what is stored in tempdb?

Of course, temp tables and table variables go into tempdb; these are usually the first object types we think about. Whenever you create a temporary table in a stored procedure or in a regular batch, it goes to tempdb.
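As a minimal illustration (the table and column names here are hypothetical), a local temporary table created in a batch or stored procedure is allocated in tempdb and dropped when it goes out of scope:

```sql
-- Hypothetical example: #OrderStaging lives in tempdb for the life of the session or batch
CREATE TABLE #OrderStaging (OrderID int PRIMARY KEY, Total decimal(10, 2));
INSERT INTO #OrderStaging (OrderID, Total) VALUES (1, 19.99), (2, 45.00);
SELECT OrderID, Total FROM #OrderStaging;
DROP TABLE #OrderStaging;
```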

Row versions go into tempdb if you are using snapshot isolation or read committed snapshot isolation: every time a row is modified by a transaction, the database engine stores a version of the previously committed image of the row in tempdb.
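For example, enabling read committed snapshot isolation on a database (the database name below is a placeholder) turns on this row versioning behavior:

```sql
-- Hypothetical database name; once enabled, readers use row versions stored in tempdb
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;
```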

Hash operations will spill to tempdb. Worktables used for spools, cursors, sorts, and temporary large object (LOB) storage also go to tempdb.

Triggers use the row version store, which also lives in tempdb.

Online index operations: if you are maintaining your indexes with the ONLINE = ON option, we create temporary shadow copies in tempdb.
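A typical online rebuild looks like this (the table and index names are hypothetical):

```sql
-- With ONLINE = ON the rebuild happens without long-term blocking,
-- and SORT_IN_TEMPDB = ON stages the intermediate sort results in tempdb
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```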

DBCC CHECKDB also creates shadow copies in tempdb.

As you can see, there’s a lot that goes into tempdb. Tempdb will be used for user objects such as global or local temporary tables and indexes, stored procedures, table variables, table-valued functions, and cursors. But we also use tempdb for internal scenarios, such as spills to disk and worktables that store intermediate results for spools and sorts.

What causes contention in tempdb?

Tempdb is used for many different scenarios, there is only one tempdb database per SQL Server instance, and we have been pushing toward bigger machines with larger workloads. As a result, we have started seeing concurrency issues emerge in the tempdb space in three key areas:

  1. Object allocation contention
  2. Metadata contention
  3. Temp table cache contention

What is object allocation contention?

Again, tempdb is structured just like any other database, but remember: the workloads are different in tempdb, so object allocation contention matters more because of the constant creation and destruction of objects.

On a server experiencing object allocation contention, you may notice severe blocking, especially under heavy load. SQL Server workloads will be slowed, but the server’s CPU may appear underutilized because the contention resides in the system metadata. To help avoid these areas of contention, SQL Server support teams have recommended a number of long-standing best practices.

For tempdb, one of the key best practices has always been to create multiple data files of the same size with the same growth rate. The reason is to help alleviate object allocation contention by distributing tempdb activity across multiple files.

By default, SQL Server databases have a primary data file that uses the .mdf file extension and a transaction log file that uses the .ldf file extension. The primary tempdb data file, tempdb.mdf, has key pages that track how objects are allocated in SQL Server. As we see in the image below, the table under the tempdb.mdf title represents pages.

Page types used in the data files of the tempdb database.

Page 0 is a header page, and this is true for any primary or secondary data files in SQL Server.

Page 1 is what is called a page free space (PFS) page, which is used any time SQL Server needs to allocate space to an object. Basically, the PFS page contains information on how full each of the next 8,088 pages in the database is. If SQL Server needs to add data, it uses the PFS page to see how full the associated object’s pages are and where the data can fit.

After 8,088 pages, there is another PFS page in the same data file, and the pattern repeats. So, you will have more than one PFS page, depending on how large the file is.
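Because the interval repeats every 8,088 pages, you can work out which PFS page covers a given page with simple integer arithmetic (the page number here is just an example; note the very first PFS page sits at page 1, not page 0):

```sql
-- Integer division: page 10,000 falls in the second PFS interval,
-- so it is covered by the PFS page at page id 8,088
SELECT (10000 / 8088) * 8088 AS covering_pfs_page;
```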

Page 2 is always the global allocation map (GAM), and this is where extent allocation comes in: the GAM is consulted whenever SQL Server needs to allocate a uniform extent.

An extent in SQL Server comprises eight 8 KB pages (64 KB in total), and this is usually the unit of data allocation. If you have a table that’s larger than eight pages, any time we allocate space to that table we allocate a full extent. This is a uniform extent because all eight pages in the extent belong to that one object.

So, any time SQL Server needs to allocate a uniform extent to an object, it goes to the GAM page and checks availability. The GAM is a bitmap: if the bit is 1, the extent is available to be allocated; if it is 0, it is not. Once SQL Server has allocated the extent, it flips the bit on the GAM page from 1 to 0 to show that the extent is no longer available.

The larger the file, the more GAM pages you will have: you get another GAM page after every 63,904 extents.

Page 3 is for the shared global allocation map (SGAM), and this page is used when SQL Server needs to allocate space on a mixed extent. The SGAM tracks mixed extent usage, that is, extents being used by more than one object. If an object is smaller than eight pages and we don’t want to allocate a full extent, we use a mixed extent. By default, when a brand-new object is created, its first eight pages are allocated on mixed extents.

The SGAM is a bitmap: if the bit is 1, the extent is being used as a mixed extent and has space available to be allocated. We then look at the corresponding PFS page to find the empty pages within that extent and allocate a page. The important point is that the SGAM is used in conjunction with the PFS page to allocate space on a mixed extent, and again, after about 64,000 extents you get another SGAM page in the same file.

When you have multiple files in tempdb, each file gets its own header, PFS, GAM, and SGAM pages, because all files have the same structure, and with multiple files we try to share the workload across them.

SQL Server spreads object allocations across files in the same filegroup based on the proportional fill algorithm, which tries to keep the same percentage of free space in each file within the filegroup. If all the files in the filegroup start at the same size, stay the same size, and have the same amount of free space, the proportional fill algorithm effectively becomes a round-robin algorithm: each subsequent allocation hits the next file, and so on. That is why we recommend having multiple files of the same size: spreading object allocations across all the files lets you get around this object allocation bottleneck. That recommendation came out in SQL Server 2000, and it is still true in SQL Server 2019.

Multiple data files of equal size remain our best practice, and this will stand until testing proves otherwise.
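If you do need to add tempdb data files, the statement follows the usual pattern (the file name and path shown are placeholders; size each new file to match the existing ones):

```sql
-- Hypothetical file name and path; match SIZE and FILEGROWTH to the existing files
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 8GB,
          FILEGROWTH = 64MB);
```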

Tracking object allocation contention

Prior to SQL Server 2019, the best approach was to monitor the sys.dm_os_waiting_tasks dynamic management view and log the contention history over time.

Consider the SQL Server statement listed below:

```sql
-- Show tasks currently waiting on page latches; resource_description
-- holds the wait resource in database_id:file_id:page_id form
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%';
```

When looking at the wait resource, you can monitor the contention knowing that the first number refers to the database id, the second number is the file id, and the last number is the page number.

This means that contention on wait resource 2:7:2 is tempdb contention, as the tempdb database always has database id 2. Here the contention is on file id 7, and the page number indicates GAM contention, as the figure illustrates (page 1 is the PFS, page 2 is the GAM, and page 3 is the SGAM).

These wait resource references are commonly in the format 2:1:1, 2:1:3, and so on.

Any results found for database id 2 indicate that there are requests waiting on tempdb resources, and the accumulation of these requests can help database administrators narrow down the root cause of the contention.

In SQL Server 2019, we added new functions to improve tempdb troubleshooting. The sys.fn_PageResCracker dynamic management function returns the db_id, file_id, and page_id for a given page_resource value, and the sys.dm_db_page_info dynamic management function returns page header information such as page_id, file_id, index_id, and object_id. This information is useful for troubleshooting and debugging various performance issues (lock and latch contention) and corruption issues.

The example query below can be used to better resolve wait resource information on SQL Server 2019 and later:

```sql
-- Crack the page_resource of each active request into db/file/page ids,
-- then resolve the page header details for the contended page
SELECT er.session_id, er.wait_type, er.wait_resource,
       pc.db_id, pc.file_id, pc.page_id,
       pi.object_id, pi.index_id, pi.page_type_desc
FROM sys.dm_exec_requests AS er
CROSS APPLY sys.fn_PageResCracker(er.page_resource) AS pc
CROSS APPLY sys.dm_db_page_info(pc.db_id, pc.file_id, pc.page_id, 'DETAILED') AS pi;
```

What is metadata contention?

The other main type of contention is called metadata contention. This type of contention is not about I/O; it occurs in memory when multiple threads try to modify the same page in memory at the same time.

You can track metadata contention using the same methods you would use for object allocation contention. The difference is that instead of the wait resource being 2:1:1, 2:1:2, or 2:1:3 (the PFS, GAM, and SGAM), you are more likely to see the contention on index and data pages, where the page number in the wait resource is a higher value such as 2:1:111, 2:1:118, or 2:1:122.

For metadata contention, it is useful to note page numbers beyond the single digits and to track the object name and page type description. The object names will show as system tables such as sysallocunits, syscolpars, sysjobactivity, sysscalartypes, sysschobjs, and so on.

Metadata contention was addressed in SQL Server 2019 with the memory-optimized tempdb metadata improvement.

Memory-optimized metadata tables for tempdb are basically a combination of the In-Memory OLTP and temp table metadata features. We took the tempdb system tables and moved them into non-durable memory-optimized tables.

Remember that tempdb is temporary: it gets dropped and recreated every time you restart SQL Server, so there was no reason for the metadata to be durable. We converted the 12 system tables involved in object tracking in tempdb into memory-optimized, non-durable tables.

We don’t need a specialized memory-optimized filegroup for tempdb since it is non-durable anyway. All of it is “in memory”: no disk is needed, and with memory-optimized tables there is no latching and no locking. We can massively increase the concurrency against these metadata tables using these lock-free, latch-free data structures.

Enabling memory-optimized tempdb metadata does require a restart, since we must configure the Hekaton DLLs. This was a big improvement in SQL Server 2019 that eliminates a lot of the metadata contention.
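Turning the feature on is a single server-level configuration change followed by a restart, and you can confirm the setting afterwards with SERVERPROPERTY:

```sql
-- Requires a restart of SQL Server to take effect
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Returns 1 once the feature is active
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_enabled;
```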

Learn more about how this improvement removes the metadata contention bottleneck for tempdb-heavy workloads in our memory-optimized tempdb documentation.

Temp table cache contention

In past SQL Server releases, starting with SQL Server 2005, we introduced temp table caching to get around some of the metadata contention. Basically, when a temp table object is cached and you delete that table, SQL Server doesn’t actually drop the metadata. We keep a cache of all the temporary objects used through a stored procedure and reuse the metadata for those objects when you call the same stored procedure with the same temp table again.

As a result, temp table caching has fewer hits to the metadata and alleviates some of that metadata contention—but not completely.

Temp table caching helped address metadata contention by allowing us to reuse tables that didn’t change between stored procedure executions. As long as the table was not altered after it was created, it would be eligible for reuse by another execution of the same stored procedure. However, if the table is altered (by adding an index or a column, for example), it can’t be reused and must be dropped when the stored procedure completes.

There are several different tables that we need to delete metadata from in order to completely drop the table, and this was all being done synchronously at the end of the stored procedure execution. Additionally, every new SQL Server feature (columnstore indexes, temporal tables, In-Memory OLTP, and so on) requires new metadata to be tracked, so the number of system tables we need to delete from keeps increasing, which makes the process more impactful.

Temp table cache contention can be more prominent in larger SQL Server environments with higher core counts. As the size of the cache and the number of concurrent threads accessing the cache grow, cache access can slow, and contention can arise for the memory object associated with the cache.

This condition can manifest in two different ways: CMEMTHREAD waits and SOS_CACHESTORE spinlock waits. To address temp table cache contention, track these wait conditions for evidence and ensure you have installed the latest cumulative updates (CUs) for SQL Server.
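A quick way to look for this evidence is to query the wait and spinlock statistics DMVs:

```sql
-- Cumulative CMEMTHREAD waits since the last restart (or stats clear)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CMEMTHREAD';

-- Spins, collisions, and backoffs on the SOS_CACHESTORE spinlock
SELECT name, collisions, spins, backoffs
FROM sys.dm_os_spinlock_stats
WHERE name = 'SOS_CACHESTORE';
```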

Check out our Tech Community blog for more information on temp table caching in SQL Server.

SQL Server 2022 tempdb improvements

In SQL Server 2022, we have addressed the final common areas of contention by introducing concurrent GAM and SGAM updates, similar to the concurrent PFS updates introduced in SQL Server 2019.

We use the Global Allocation Map (GAM) pages when we are looking for uniform extents and the Shared Global Allocation Map (SGAM) pages when we are looking for mixed extents in tempdb.

In previous releases, under highly concurrent workloads we may see GAM contention, where many different threads attempt to allocate extents on the same GAM page. Each thread must wait for another thread to release its UPDATE latch before it can acquire its own latch and make changes; we are just waiting in line.

As can be seen in the workload example below, on SQL Server 2019 there is a wall of GAM contention driving over 123,000 counts of contention with the longest wait taking 949 milliseconds.

Dashboard showing the performance of SQL Server 2019 and 123,000 latch contentions over the last five minutes.

The reason for this is that with the update latch, only one thread can modify the GAM page at a time, leading to contention. This is the primary reason we still need multiple data files. Because of this contention, SQL Server throughput is decreased, and workloads that require many updates to the GAM page take longer to complete while the machine’s CPU is underutilized. This contention is driven by workload volume, especially repetitive create-and-drop operations.

Starting with SQL Server 2016, we changed the default behavior to always allocate uniform extents when creating new objects in tempdb. This helped avoid most of the SGAM contention, but we still use mixed extents for Index Allocation Map (IAM) page allocations. IAM pages are used to track all the pages that belong to an object, so every object that gets created has at least one IAM page. For most workloads, these IAM page allocations don’t cause any issues, but for extremely busy tempdb workloads with many threads of concurrent allocations, these IAM page allocations can still cause SGAM contention.

SQL Server 2022 addresses GAM and SGAM contention

SQL Server tempdb contention is nearly completely addressed in SQL Server 2022, and these benefits are on by default. With these improvements, we allow concurrent updates to the GAM and SGAM under a shared latch rather than an update latch. This erases nearly all tempdb contention by allowing parallel threads to modify the GAM and SGAM pages, as can be seen in the example below.

Dashboard showing a similar workload run on SQL Server 2022 with only 607 points of contention over the same period of time.

In the SQL Server 2022 workload example shown here, we have only 607 points of contention over the same time period as the SQL Server 2019 run, with the longest wait at only 342 milliseconds. The only contention remaining in this example was metadata contention, because we did not enable the memory-optimized tempdb metadata improvement.

There are still possible points of metadata contention, but with SQL Server 2022, the points of contention will be rare and should not lead to any significant performance challenges.

If concurrent GAM and SGAM updates address some of the last areas of contention, do we still need the best practice of maintaining multiple data files for tempdb?

Out of the gate, we are going to continue recommending the same best practices, but we may adjust if customer feedback and testing show that they are no longer required.

Summary

In SQL Server 2022, we have improved tempdb performance to the point that we may need to revise tempdb best practices that have stood true for nearly a quarter of a century.

We have greatly improved the performance of tempdb. Because so much of what runs on SQL Server relies on tempdb, these enhancements alone may be enough to make SQL Server 2022 a compelling upgrade for most organizations.

The key point is that it is important for DBAs to optimize tempdb performance and to track and address tempdb bottlenecks. SQL Server has improved tempdb in every release, and SQL Server 2022 is no exception.

Next steps

System page latch concurrency enhancements in SQL Server 2022 are just one of the many benefits of migrating to SQL Server 2022.

Download the latest release of SQL Server 2022 if you haven’t already done so and check out the SQL Server 2022 Overview and What’s New references. There are many new features and improved functionality being added to this release.

Learn more

For more information and to get started, check out the following references:

Read What’s New in SQL Server 2022

Watch the Data Exposed SQL Server 2022 overview video: SQL Server 2022 Storage Engine Capabilities (Ep. 6) | Data Exposed


The post Improve scalability with system page latch concurrency enhancements in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Microsoft Azure at Data Platform Virtual Summit 2020 http://approjects.co.za/?big=en-us/sql-server/blog/2020/11/19/microsoft-azure-at-data-platform-virtual-summit-2020/ Thu, 19 Nov 2020 17:00:48 +0000 Data Platform Virtual Summit 2020 (DPS 2020) is just a couple of weeks away.

The post Microsoft Azure at Data Platform Virtual Summit 2020 appeared first on Microsoft SQL Server Blog.

]]>
Data Platform Virtual Summit 2020 (DPS 2020) is just a couple of weeks away. A global learning event for data professionals, DPS 2020 features a keynote from Rohan Kumar, Microsoft Corporate Vice President of Azure Data, as well as 200 breakout sessions and 30 training classes delivered by Azure Data engineering, partner organizations, and community leaders. With content delivered around-the-clock, DPS 2020 empowers Azure Data professionals worldwide with the deep technical skills they need to move ahead in their careers and digitally transform their organizations.

This year, DPS 2020 features five parallel tracks focusing on Azure Data:

  • Advanced Analytics
  • Artificial Intelligence
  • Azure Data Administration
  • Azure Data Development
  • Power BI

The virtual platform offers live Q and A, a networking lounge, a community zone, and technical round tables. Additionally, attendees will receive 12-month on-demand access to session recordings.

DPS 2020 offers an incredible opportunity to learn directly from our engineering teams, who will share the latest advances and insights on the Azure Data platform.

  • Rohan Kumar will deliver the keynote. Rohan will highlight the latest innovations across the Microsoft Azure Data platform and share customer case studies. The keynote will also feature demos from multiple Microsoft engineers, including Anitha Adusumilli, Anna Hoffman, Buck Woody, Travis Wright, and Vasiya Krishnan.
  • Microsoft Azure Data engineering teams will deliver over 35 sessions at DPS 2020. Hear the latest from the people who develop the tools you use every day, and engage in live discussions.
  • Visit the virtual expo hall where you can connect with our team across SQL Server, Azure SQL, Azure Synapse Analytics, Power BI, and more.

Register now for a week of training at Data Platform Virtual Summit and receive twelve months of on-demand access to the DPS 2020 sessions.

The post Microsoft Azure at Data Platform Virtual Summit 2020 appeared first on Microsoft SQL Server Blog.

]]>