Announcing the retirement of SQL Server Stretch Database
http://approjects.co.za/?big=en-us/sql-server/blog/2024/07/03/announcing-the-retirement-of-sql-server-stretch-database/
Wed, 03 Jul 2024 16:00:00 +0000
In July 2024, SQL Server Stretch Database will be discontinued for SQL Server 2022, 2019, and 2017.

Ever since Microsoft introduced SQL Server Stretch Database in 2016, our guiding principles for hybrid data storage solutions have been affordability, security, and native Azure integration. Customers have told us they want to reduce maintenance and storage costs for on-premises data, with options to scale up or down as needed; greater peace of mind from advanced security features such as Always Encrypted and row-level security; and the ability to unlock value from warm and cold data stretched to the cloud using Microsoft Azure analytics services.

During recent years, Azure has undergone significant evolution, marked by innovations like Microsoft Fabric and Azure Data Lake Storage. As we continue this journey, we must keep evolving our approach to hybrid data storage so that SQL Server customers can take full advantage of Azure.

Retirement of SQL Server Stretch Database 

On November 16, 2022, the SQL Server Stretch Database feature was deprecated in SQL Server 2022. For in-market versions of SQL Server 2019 and 2017, we added an improvement that allowed the Stretch Database feature to stretch a table to an Azure SQL Database. Effective July 9, 2024, the supporting Azure service, known as SQL Server Stretch Database edition, is retired. Impacted versions of SQL Server include SQL Server 2022, 2019, 2017, and 2016.

In July 2024, SQL Server Stretch Database will be discontinued for SQL Server 2022, 2019, 2017, and 2016. We understand that retiring an Azure service may impact your current workload and use of Stretch Database. Therefore, we kindly request that you either migrate to Azure or bring your data back from Azure to your on-premises version of SQL Server. Additionally, if you’re exploring alternatives for archiving data to cold and warm storage in the cloud, we’ve introduced significant new capabilities in SQL Server 2022 through its data virtualization suite.

The path forward 

SQL Server 2022 supports CREATE EXTERNAL TABLE AS SELECT (CETaS), which can help customers archive and store cold data in Azure Storage. The data is stored in Parquet, an open-source file format that handles complex data in large volumes well and, thanks to its efficient compression, is one of the most cost-effective storage formats. Using OneLake shortcuts, customers can then leverage Microsoft Fabric to run cloud-scale analytics on the archived data.
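
To make that concrete, here is a minimal T-SQL sketch of the CETaS archiving pattern. It assumes the PolyBase feature is installed on the instance, and the storage account, container, credential, and table names (ColdArchive, ArchiveCredential, dbo.Orders, and so on) are hypothetical:

-- One-time instance setup (assumes the PolyBase feature is installed).
EXEC sp_configure 'polybase enabled', 1;
EXEC sp_configure 'allow polybase export', 1;
RECONFIGURE;

-- Hypothetical external data source pointing at an Azure Blob Storage container,
-- authenticated with an existing database scoped credential (for example, a SAS token).
CREATE EXTERNAL DATA SOURCE ColdArchive
WITH (
    LOCATION = 'abs://archive@mystorageaccount.blob.core.windows.net',
    CREDENTIAL = ArchiveCredential
);

-- Write the exported data as Parquet.
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- Archive cold rows: the SELECT runs in parallel and lands Parquet files in the container.
CREATE EXTERNAL TABLE dbo.OrdersArchive
WITH (
    LOCATION = '/orders/2020/',
    DATA_SOURCE = ColdArchive,
    FILE_FORMAT = ParquetFormat
)
AS
SELECT * FROM dbo.Orders
WHERE OrderDate < '2021-01-01';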

Our priority is to empower our SQL Server customers with the tools and services that leverage the latest and greatest from Azure. If you need assistance in exploring how Microsoft can best empower your hybrid data archiving needs, please contact us.

New solution FAQs

What’s CETaS? 

CETaS creates an external table and then exports, in parallel, the results of a Transact-SQL SELECT statement.

  • Azure Synapse Analytics and Analytics Platform System support Hadoop or Azure Blob Storage.
  • SQL Server 2022 (16.x) and later versions support CETaS to create an external table and then export, in parallel, the result of a Transact-SQL SELECT statement to Azure Data Lake Storage Gen2, Azure Storage Account v2, and S3-compatible object storage. 

What is Fabric? 

Fabric is an end-to-end analytics and data platform designed for enterprises that require a unified solution. It encompasses data movement, processing, ingestion, transformation, real-time event routing, and report building. Fabric offers a comprehensive suite of services including Data engineering, Data Factory, Data Science, Real-Time Analytics, Data Warehouse, and Databases. 

With Fabric, you don’t need to assemble different services from multiple vendors. Instead, it offers a seamlessly integrated, user-friendly platform that simplifies your analytics requirements. Operating on a software as a service (SaaS) model, Fabric brings simplicity and integration to your solutions. 

Fabric integrates separate components into a cohesive stack. Instead of relying on different databases or data warehouses, you can centralize data storage with Microsoft OneLake. AI capabilities are seamlessly embedded within Fabric, eliminating the need for manual integration. With Fabric, you can easily transition your raw data into actionable insights for business users. 

What are OneLake shortcuts?

Shortcuts in OneLake allow you to unify your data across domains, clouds, and accounts by creating a single virtual data lake for your entire enterprise. All Fabric experiences and analytical engines can directly connect to your existing data sources such as Azure, Amazon Web Services (AWS), and OneLake through a unified namespace. OneLake manages all permissions and credentials, so you don’t need to separately configure each Fabric workload to connect to each data source. Additionally, you can use shortcuts to eliminate edge copies of data and reduce process latency associated with data copies and staging. 

Shortcuts are objects in OneLake that point to other storage locations. The location can be internal or external to OneLake. The location that a shortcut points to is known as the target path of the shortcut. The location where the shortcut appears is known as the shortcut path. Shortcuts appear as folders in OneLake and any workload or service that has access to OneLake can use them. Shortcuts behave like symbolic links. They’re an independent object from the target. If you delete a shortcut, the target remains unaffected. If you move, rename, or delete a target path, the shortcut can break. 

Learn more 

Microsoft Fabric

Bring your data into the era of AI

Getting started with delivering generative AI capabilities in SQL Server and Azure SQL
http://approjects.co.za/?big=en-us/sql-server/blog/2024/06/26/getting-started-with-delivering-generative-ai-capabilities-in-sql-server-and-azure-sql/
Wed, 26 Jun 2024 15:00:00 +0000
Microsoft SQL Server and Azure SQL are the data platform to power today’s modern applications with security, performance, and availability.

AI is transforming everything we do, including how we interact with data. Data is the fuel for AI. Microsoft SQL Server and Azure SQL are the data platform to power today’s modern applications with security, performance, and availability, and they also have the capabilities to support the scenarios required in the era of AI.

Azure SQL and SQL Server support building new generative AI experiences that become supercharged when combined with your data. In addition, SQL brings AI assistance to a new level with copilot experiences for both self-help and natural language to SQL capabilities.

In this blog post, I’ll share how you can get started with these new AI experiences—Azure SQL and SQL Server. First, check out our latest story on Microsoft Mechanics:

Infographic: Use AI with your SQL data, showing a large language model, SQL, and Copilot connected through Retrieval Augmented Generation.

Responsible AI

Many conversations about AI start with a statement on responsible AI. Microsoft has established a set of policies, research, engineering efforts, and principles to ensure AI technologies are adopted, implemented, and used in a responsible manner.

These principles include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Your data is your data. One promise from Microsoft is that the private data of any user, including prompts and responses, is never used to fine-tune a model that Microsoft hosts or implements.

Generative AI applications with your data

One of the motivations for generative AI applications is to become more productive, creative, and efficient through the generation of content in all forms: text, audio, and video. Many of today’s examples of generative AI applications involve the use of a natural language prompt and interaction with a language model. Many of you have probably at some point used an application like ChatGPT or Microsoft Copilot, which are great examples of generative AI applications.

Get smarter with your data

While these are great applications, they don’t know about your data. The combination of a generative AI application with your data (for example, data stored in a database) can be quite powerful. Generative AI provides methods for smarter searching of your data. A common application pattern is to use language models with a prompt application to “chat with your data.” Using the concept of vector embeddings, language models allow you to get more precision on questions about your data. In addition, responses are more tailored to your users, and searches can often be faster because language models let you use the power of natural language. Generative AI applications with your data provide unique intelligence in an interactive manner, including conversations. Language models are trained to provide more context on your search, often giving you more (hence generated) content than you might normally get using common search techniques within a database engine with a language like SQL.

As you investigate how you can take advantage of generative AI with language models, there are two important concepts to understand:

Prompt engineering is the discipline of using high quality and descriptive prompts when interacting with a language model. The concept is simple: the better the prompt, the better the likely response from the model. For example, let’s say you use Microsoft Copilot and type in a prompt like “What are the best steak restaurants in Fort Worth, Texas?” You will get a good list of steak houses in Fort Worth, Texas based on a search by Copilot of rankings across a broad set of sources. But what if you are on a bit of a tight budget? Instead of looking at the results from the prompt and trying to figure out what prices you can afford, you could instead ask “I’m on a tight budget but want to eat at a good steakhouse in Fort Worth, Texas.” Now your results are more tailored to what you really want. And since you are interacting with a language model, it understands that the phrase “tight budget” means you need choices that are good but affordable.

While this technique can be great if you are interacting with a model that is trained to help you search the internet, what about your own data? One prompt engineering technique to get smarter with your data is called Retrieval Augmented Generation (RAG). The concept of RAG is to search for information from a source of data and use those results to augment the prompt to the model. For Azure SQL and SQL Server, this could mean using standard SQL techniques to search for data using Transact-SQL (T-SQL), taking these results, and sending them along with the original prompt to the language model. This technique is simple and can be an effective way to get smarter with your data, and this can work with almost any type of data you search, not just SQL databases.

For Azure SQL and SQL Server, a more sophisticated technique is called hybrid search. With hybrid search, you can use the power of vector search combined with the query capabilities of your SQL data. Vector embeddings are numerical representations of data that capture semantic meaning and similarities. The key to embeddings with language models is that the model can generate embeddings based on data like text. This means you can take text data inside your SQL database and use a model to generate embeddings and then store these embeddings in your database. Now anytime you want to search for data inside the database, you can send a prompt to a language model which will generate embeddings for the prompt. And then you can use vector search techniques to compare the embeddings from the prompt with the embeddings stored in your database. You can then combine the vector search with other techniques you would normally use in T-SQL to find data in your database: a hybrid search.
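
As a rough illustration of what storing embeddings in your database can look like, here is a minimal T-SQL sketch; the table, column, and variable names are hypothetical, and the embedding bytes themselves would be produced by a language model (for example, an Azure OpenAI embeddings deployment) outside this snippet:

-- Hypothetical table that keeps the source text and its vector embedding side by side.
CREATE TABLE dbo.product_descriptions
(
    product_id  int             NOT NULL PRIMARY KEY,
    description nvarchar(max)   NOT NULL,
    embedding   varbinary(8000) NULL   -- embedding generated from the description text
);

-- After calling the embedding model, the application saves the returned vector.
DECLARE @product_id int = 1;
DECLARE @description_embedding varbinary(8000) = 0x;   -- placeholder; real bytes come from the model

UPDATE dbo.product_descriptions
SET embedding = @description_embedding
WHERE product_id = @product_id;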

There are methods today to use hybrid search completely inside the engine using T-SQL and outside the engine using Microsoft Azure AI Services or frameworks like LangChain or Semantic Kernel.

Get started quickly with Azure AI Services

One approach to get started quickly with no code required is to index your SQL database using Azure AI Search and then use Azure OpenAI Service to build a simple prompt app and “chat with your data” using a hybrid search technique.

You can use Azure AI Search to build an index based on a table in your SQL Server or Azure SQL database. When you build the index, you can apply a skillset to generate embeddings based on your data and store the result in the index. Now you can use Azure OpenAI with a prompt application to perform hybrid searches on your data. One example prompt application for simple testing is Azure AI Studio. In addition, as you change your SQL data, the index is automatically updated, including the embeddings. The figure below shows the basic flow:

Use Azure AI Services with your SQL data flow chart

You can see this in action from the latest Microsoft Mechanics video or download a deck with demo recordings. One of the interesting aspects of this example is the method of changing the system message to direct the language model to respond in a unique way using the same data. This is also a great example of prompt engineering.

Learn more about Azure SQL in Azure AI Search.

Use hybrid search inside the engine with T-SQL

Let’s say that instead of using a separate index, you would like to build generative AI capabilities for your application entirely inside the engine using T-SQL. You can do this in a very powerful way for Azure SQL Database today using a combination of vector embeddings, vector search, and other T-SQL search methods. This is a true hybrid search because you are using all the power of the SQL query processor together with a vector search. An example my colleague Davide Mauri has developed uses these techniques to help him find the best restaurant for one of his favorite Italian foods, focaccia bread.

Davide built an application that stores reviews from restaurants in the form of vector embeddings using Azure OpenAI Service with Azure SQL Database Representational State Transfer (REST) API inside the engine. With this in place, he can take any prompt to search for the best focaccia bread and use the same technique to generate embeddings for the prompt. Then, he can use a new T-SQL vector_distance function to perform a similarity search. The true power of SQL is possible because Davide built queries to combine this vector search with other criteria from spatial types, the new JSON data type, and the new Regular Expression (RegEx) T-SQL capabilities.
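
To make the shape of such a query concrete, here is a minimal sketch. It assumes the Azure SQL Database vector search preview mentioned below, a hypothetical dbo.restaurant_reviews table whose review_embedding column already holds embeddings, and a hypothetical dbo.GenerateEmbedding procedure that wraps the in-engine REST call to the embedding model:

DECLARE @question nvarchar(max) = N'Where can I find the best focaccia?';
DECLARE @question_embedding varbinary(8000);

-- Generate an embedding for the prompt (hypothetical wrapper around the in-engine REST call).
EXEC dbo.GenerateEmbedding @text = @question, @embedding = @question_embedding OUTPUT;

-- Hybrid search: vector similarity combined with an ordinary relational predicate.
SELECT TOP (10)
    r.restaurant_name,
    r.review_text,
    VECTOR_DISTANCE('cosine', r.review_embedding, @question_embedding) AS distance
FROM dbo.restaurant_reviews AS r
WHERE r.city = N'Milan'            -- regular T-SQL filter
ORDER BY distance;                 -- smallest distance = most similar reviews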

You can see a diagram of how these techniques are combined together below:

Hybrid search with Azure SQL example

You can see this demo in action in our Microsoft Mechanics video or download a deck with demo recordings. You can learn more about the new JSON data type (preview). You can also sign up to preview the new vector search capabilities and RegEx in Azure SQL Database.

Building generative AI applications using frameworks

There are other methods to build generative AI applications with Azure SQL and SQL Server using frameworks such as:

  • LangChain:
    LangChain is an open-source framework to orchestrate AI applications with language models. You can use programming languages such as Python and JavaScript to build your own generative AI application. LangChain supports the SQL Agent Toolkit which allows you to interact with a SQL database using natural language prompts. The toolkit integrates the connection to your database with a language model to generate SQL queries based on natural language prompts. You can see an example of this in the blog post “Building your own DB Copilot for Azure SQL with Azure OpenAI GPT-4.”
  • Semantic Kernel:
    Semantic Kernel is an open-source SDK to allow you to build AI applications in C#, Python, and Java, interfacing with many common models in the industry such as OpenAI, Azure OpenAI, and Hugging Face. A library has been built to allow a Semantic Kernel application to interact with Azure SQL Database (and use the new vector search capability) called the SQL Connector.

See a full range of SQL and generative AI examples.

The age of copilots

Microsoft has transformed the industry and how we work and live with a new set of AI assisted experiences called Microsoft Copilot. Copilots are AI companions that work everywhere you do and intelligently adapt to your needs.

Use Copilots where you live

I realize there seem to be copilots everywhere. It is hard to keep track. Microsoft is investing in Copilot experiences in almost every product or service. Use the product or service you normally do and see what Copilot can offer. For example, if you have Microsoft 365, use Copilot for Microsoft 365 naturally within Microsoft Teams or any Office product or service. I personally use Microsoft Copilot in my Edge browser or on the app on my phone for any search experience I need today—web or work related.

Microsoft Copilot in Azure

The primary resource to manage and explore Microsoft Azure is the Azure portal. You can now use Microsoft Copilot in Azure within the Azure portal to manage, deploy, and troubleshoot Azure resources. Azure SQL Database is one of the most popular Azure resources in the world, so we have built two distinct experiences within the Copilot in Azure framework using natural language for self-guided assistance and T-SQL query authoring:

Microsoft Copilot in Azure integration

One of the strengths of SQL Server is the deep built-in telemetry within the engine, all accessible through T-SQL. This includes Dynamic Management Views (DMVs) and Query Store. These rich, traditional capabilities now shine through in Copilot. For example, you can prompt Copilot with a general statement like “My database is slow” and Copilot, based on your permissions, will access real-time diagnostic data, in the context of your database, to help you quickly navigate difficult, and often vague, performance problems. Here is an example:

Screenshot of an example of using Copilot for SQL to troubleshoot performance

You can then continue a conversation with Copilot to tune the query causing the problem. There are many different skills with which Copilot can help you, all in the context of your database. Learn about all the possibilities of Copilot skills in Azure SQL Database (preview).

Natural language to SQL

The T-SQL query language has so many great capabilities and possibilities. But the open nature of T-SQL also leads to difficulties in crafting queries to meet the needs of your application. Along comes a copilot experience that allows you to “chat” with your database using natural language in the context of your database and schema: tables, columns, and key relationships. A simple example is being able to use a natural language statement to generate a query that typically requires several joins over multiple tables, like the following:

Screenshot of dashboard authoring SQL queries using Natural Language

Learn more about how to use natural language to SQL.

You can see both experiences in action in our Microsoft Mechanics video or download a deck with demo recordings.

Innovations moving forward

We are just getting started with SQL and AI. We have future innovations planned for AI services, deeper integration for vector search, and enhanced Copilot experiences for SQL Server. Stay tuned for future blog posts showcasing these innovations.

Learn more today

Here are more resources for you to learn more about SQL and AI:

Azure SQL

Migrate, modernize, and innovate with the modern SQL family of cloud database services

Provision Premium SSD v2 Storage for Microsoft SQL Server on Azure Virtual Machines in the Microsoft Azure portal
http://approjects.co.za/?big=en-us/sql-server/blog/2024/04/01/provision-premium-ssd-v2-storage-for-microsoft-sql-server-on-azure-virtual-machines-in-the-microsoft-azure-portal/
Mon, 01 Apr 2024 15:00:00 +0000
We’re excited to announce the public preview of the Premium SSD v2 provisioning experience for SQL Server on Azure Virtual Machines.

Guidance on choosing SQL Server storage options in Azure

We’re excited to announce the public preview of the Premium SSD v2 provisioning experience for SQL Server on Azure Virtual Machines (VMs) deployed in the Azure portal. Premium SSD v2 storage improves the performance, reliability, and scalability of your SQL Server workloads while offering robust resource capacity: you can create a single disk with up to 64 TiBs, 80,000 input/output operations per second (IOPS), and 1,200 MB/s throughput.

When building Azure SQL VMs in the cloud, DBAs have several storage choices to give their applications the performance and capacity their workloads require. In Azure, DBAs have compute options along with affordable storage designed to handle mission-critical SQL Server workloads. In this article, we’ll review these options, focusing on the latest capabilities of Premium SSD v2 storage and the Ebds_v5 Azure VMs, which are better together, providing the best combination of price-performance capabilities in the cloud for SQL Server workloads.

Premium SSD v2 storage in Azure portal

Deploy SQL Server on Azure VMs with Premium SSD v2 disks using the Azure portal.

Managed disk storage options

DBAs looking to move their mission-critical SQL Server applications from on-premises to the cloud have the managed disk options of Premium SSD (Gen 1), Premium SSD v2, and Ultra Disk for their production workloads, while Standard SSDs are often used for dev/test and smaller departmental workloads. The goal of this article is to review the Azure VM storage options for SQL Server and give DBAs the tools and information needed to weigh the possible trade-offs in Azure feature availability and overall costs.

Premium SSD (Gen 1)

Most DBAs looking to build scalable storage with optimal price-performance use Premium SSD (Gen 1) managed disks for their storage solutions. Using first-generation Premium SSD disks, DBAs can provision their VMs by striping multiple disks, usually P30 or P40, in a storage pool. A storage pool allows DBAs to combine disks up to the VM’s limits, providing scalability and maximizing price-performance, while Premium SSD (Gen 1) disks allow DBAs to take advantage of Azure reservations and read-only host caching.

While Premium SSD (Gen 1) storage exhibits remarkable flexibility and operates with minimal Azure feature limitations in the cloud, along with offering price-performance options through reservations for select disk types, it cannot scale to meet the demands of the latest Ebdsv5 VM series.

Additional differences between Premium SSD and Premium SSD v2 include:

  • Premium SSD v2 offers higher disk capacity, up to 64 TiBs per disk, compared to 32 TiBs for Premium SSD.
  • Premium SSD v2 provides more flexibility and control over disk performance, as you can independently adjust the disk size, IOPS, and throughput according to your workload needs, while Premium SSD (Gen 1) has fixed performance tiers based on disk size.
  • Premium SSD v2 has lower latency and higher reliability than Premium SSD as it uses newer hardware and an improved storage platform.
  • Premium SSD v2 supports higher levels of bursting compared to the previous generation, which allows you to achieve additional performance when needed for short periods of time without additional cost.

Ultra Disk

DBAs can also use Ultra Disk to meet the storage demands of the latest generation of Azure VMs as they offer sub-millisecond latency and better performance than Premium SSD (Gen 1). Like Premium SSD v2, Ultra Disk also allows DBAs to dynamically configure and scale the IOPS, throughput, and capacity of their disks independently without having to restart the VM or change the disk size. This makes Ultra Disk an attractive option for data-intensive workloads such as SQL Server that require consistency and high performance with low latency.

However, Ultra Disk also has drawbacks that make it less suitable for some scenarios. For example, Ultra Disk is only available in a limited number of regions and has stricter requirements for VM sizes, zones, and proximity placement groups. Ultra Disk also does not support disk snapshots, disk encryption, Microsoft Azure Site Recovery, or host caching options. Moreover, Ultra Disk is much more expensive than both Premium SSD and Premium SSD v2, especially for larger disk sizes.

Comparing VM and storage deployments

The established guidance for SQL Server VM deployments was to use Premium SSD (Gen 1) in a storage pool configuration with read-only caching for the data files. For the transaction log, we advised using Ultra Disk in cases where DBAs needed lower latency and could handle the limitations. This recommendation was especially the case for our previous hero VM series such as the Edsv4-series which offered the best performance for OLTP workloads at the time.

However, VMs continued to improve, and with Azure Boost and other hardware enhancements, the newest Ebs_v5 and Ebds_v5 VMs have proven to be the optimal VM series for SQL Server workloads. The newest Ebs_v5 and Ebds_v5 VMs power higher levels of IOPS and throughput, and with NVMe storage interface support they can scale well beyond the capabilities of Premium SSD (Gen 1). The Ebs_v5 and Ebds_v5 VM series, and larger VMs on the horizon, require a higher level of storage performance than Premium SSD (Gen 1) can provide, both to match the capabilities of our newest generation of Azure SQL VMs and to avoid being throttled or capped when your application pushes higher levels of IOPS and throughput.

The next generation of Azure VMs will further push well beyond the largest storage needs of our current generation. For example, the largest machine in the previous generation Edsv4-series is the E80ids_v4 which is an Azure SQL VM of 80 vCores, 504 GiBs memory, 80,000 max uncached IOPS, and 1,500 MBps max uncached disk throughput. For a machine of this size, a Premium SSD storage pool would require 16 x P30 disks to provide the same number of IOPS that a single Premium SSD v2 disk could achieve, but with improved latency and less overall cost.

In comparison, the Ebds_v5 series has a VM size of E112ibds_v5 that supports 400,000 max uncached IOPS and 10,000 MBps max uncached disk throughput (Ultra/Pv2-SSD). Premium SSD (Gen 1) would require 80 disks to match the IOPS capabilities of this VM, which would exceed the VM’s max data disk limit.

Premium SSD v2 only needs five disks in a storage pool and additionally allows adjusting the IOPS and throughput based on needs for a better overall total cost of ownership (TCO).

Azure SQL VM best practices

Premium SSD v2 has more flexibility than Premium SSDs (Gen 1) and Ultra Disk. You can choose any supported size for a Premium SSD v2 and change the performance parameters without interruption. Premium SSD v2 does not have host caching, but it has much lower latency, which helps with some of the same issues that host caching helps with. The ability to modify IOPS, throughput, and size on demand means you can reduce the management workload of having to combine disks to meet your needs.

To get started, when provisioning a new SQL Server on Azure VM in the Azure portal, you can choose Premium SSD v2 for eligible VMs:

Provisioning a new SQL Server on Azure VM

Premium SSD v2 allows you to change disk size, IOPS, and throughput independently to reach your performance targets, making workloads more cost efficient while also adapting to changing performance needs.

With a capacity of 64 TiBs, 80,000 IOPS, and 1,200 MBps of throughput, most environments can benefit from the performance capabilities of a single Premium SSD v2 disk—but for our largest Azure VMs, Premium SSD v2 disks can be combined into a storage pool to provide the performance required for a single logical drive.

When deploying your SQL Server VM image in the Azure portal, Premium SSD v2 is available for the Ebds_v5 and Ebs_v5 Azure VM series which are optimized for high-performance database workloads.

Configure Storage

The following table helps visualize some of the performance gains and cost savings when using Premium SSD v2 with your Ebds_v5 and Ebs_v5 VMs:

Ebdsv5 and Premium SSDv2 together
* This cost is for pay-as-you-go compute only, assuming Azure Hybrid Benefit for both Windows OS and SQL Server licensing costs.
1. The HammerDB TPC-C workload is derived from the TPC-C Benchmark and is not comparable to published TPC-C Benchmark results, as the HammerDB TPC-C workload results do not fully comply with the TPC-C Benchmark.

Learn more about Premium SSD v2 storage for SQL Server on Azure VMs

In summary, Premium SSD v2 offers enhanced performance, granular scalability, and cost-effectiveness for applications demanding sub-millisecond disk response times. While it provides more capabilities, the actual cost difference between Premium SSD, Premium SSD v2, and Ultra Disks depends on factors such as region, disk size, IOPS, and throughput. You can use the Azure pricing calculator to estimate costs based on your specific needs.

If you are deploying SQL Server VMs using the Azure portal and want to utilize Premium SSD v2, note that it is currently limited to the Ebds_v5 or Ebs_v5 series VMs in this public preview phase.

We’re committed to providing our customers with the best possible experience when running SQL Server on Azure VMs. The addition of Premium SSD v2 storage is another step toward that goal.

Try out Premium SSD v2 storage for SQL Server on Azure VMs today and please share your feedback with us. We look forward to hearing from you as we continue to improve our offerings for SQL Server on Azure VMs.

To get started, check out Use Premium SSDv2 storage with your SQL Server on Azure VMs.

You can also keep an eye on the What’s new page for all the latest and greatest updates to SQL Server on Azure VMs and What’s new for Azure storage.

SQL Server Integration Services (SSIS) Change Data Capture Attunity feature deprecations
http://approjects.co.za/?big=en-us/sql-server/blog/2024/02/28/sql-server-integration-services-ssis-change-data-capture-attunity-feature-deprecations/
Wed, 28 Feb 2024 16:00:00 +0000
This blog provides details to help support customers in modernizing to new solutions well in advance of this change.

In December 2025, Microsoft will discontinue support for the Change Data Capture (CDC) components by Attunity and the Change Data Capture (CDC) service for Oracle by Attunity in SQL Server Integration Services (SSIS). This blog provides details to help customers modernize to new solutions well in advance of this change. Support will be discontinued for the following components:

SQL Server Integration Services

Learn More

Customers using these two features are encouraged to modernize to Data Factory in Microsoft Fabric or Azure Data Factory. Customers can use the incremental data loading capability of Azure Data Factory. Azure Data Factory can be used for on-premises data sources with a self-hosted integration runtime and is fully compatible with all impacted versions of SQL Server.

Data Factory in Microsoft Fabric enables you to move and transform data from various sources to various destinations. It’s a managed cloud service designed specifically for handling complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

If you need any assistance as you plan your CDC modernization, please contact Microsoft Support.

Learn more about Data Factory in Microsoft Fabric and Azure Data Factory:

Frequently Asked Questions

What’s Data Factory in Microsoft Fabric?

Data Factory in Microsoft Fabric is the next generation of Azure Data Factory, which provides cloud-scale data movement and data transformation services that allow you to solve the most complex ETL scenarios. It’s intended to make your experience easy to use, powerful, and truly enterprise-grade. Data Factory empowers you with a modern data integration experience to ingest, prepare, and transform data from a rich set of data sources (for example, databases, data warehouses, Lakehouse, real-time data, and more). Whether you are a citizen or professional developer, you will be able to transform the data with intelligent transformations and leverage a rich set of activities. With Data Factory in Microsoft Fabric, we are bringing fast copy (data movement) capabilities to both dataflows and data pipelines. With Fast Copy, you can move data between your favorite data stores blazing fast. Most importantly, Fast Copy enables you to bring data to your Lakehouse and Data Warehouse in Microsoft Fabric for analytics.

What’s Azure Data Factory?

Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. It is a fully managed, serverless data integration solution for ingesting, preparing, and transforming all your data at scale. With Azure Data Factory, you can visually integrate data sources using more than 90 built-in, maintenance-free connectors. The service enables you to create and schedule data-driven workflows, called pipelines, that can ingest data from disparate data stores. You can build complex ETL processes that transform data visually with data flows or by using compute services such as Azure HDInsight Hadoop, Azure Databricks, and Azure SQL Database.

You can use Azure Data Factory to access and integrate data from on-premises data sources. One way to do this is by using a self-hosted integration runtime, which acts as a bridge between your on-premises data sources and the cloud-based Azure Data Factory service. This allows you to create data-driven workflows that can ingest data from your on-premises data stores and move it to the cloud for further processing and transformation.

How fast can I ingest data in Fabric data pipelines?

Fabric Data Factory allows you to develop pipelines that maximize data movement throughput for your environment. These pipelines fully utilize the following resources:

  • Network bandwidth between the source and destination data stores.
  • Source or destination data store input/output operations per second (IOPS) and bandwidth.

This full utilization means you can estimate the overall throughput by measuring the minimum throughput available across the following resources:

  • Source data store
  • Destination data store
  • Network bandwidth between the source and destination data stores

Meanwhile, we continuously work on innovations to boost the best possible throughput you can achieve. Today, the service can move a 1 TB TPC-DI dataset (Parquet files) into both a Fabric Lakehouse table and a Data Warehouse within five minutes, moving 1 billion rows in under one minute. Please note that this performance is only a reference based on the testing dataset above; the actual throughput still depends on the factors listed previously. In addition, you can always multiply your throughput by running multiple copy activities in parallel, for example by using a ForEach loop.

Where can I find more training resources to get started?


Accelerated Database Recovery enhancements in SQL Server 2022
http://approjects.co.za/?big=en-us/sql-server/blog/2023/03/28/accelerated-database-recovery-enhancements-in-sql-server-2022/
Tue, 28 Mar 2023 15:00:00 +0000
We are excited to share that there are several Accelerated Database Recovery enhancements in SQL Server 2022.

Part of the SQL Server 2022 blog series

We are excited to share that there are several Accelerated Database Recovery (ADR) enhancements in SQL Server 2022 that further improve the overall availability and scalability of the database, primarily around persistent version store (PVS) cleanup and management.

Overview of Accelerated Database Recovery (ADR) 

ADR improves database availability, especially in the presence of long-running transactions, by redesigning the SQL database engine recovery process. ADR was introduced in SQL Server 2019 (15.x) and improved in SQL Server 2022 (16.x).

ADR is also available for databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse SQL. ADR is enabled by default in SQL Database and SQL Managed Instance and cannot be disabled. 
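
For SQL Server, ADR is enabled per database. As a quick reminder of the syntax (the database name is illustrative):

-- Enable ADR on a user database.
ALTER DATABASE [WideWorldImporters]
SET ACCELERATED_DATABASE_RECOVERY = ON;
GO
-- Turn it back off if needed.
ALTER DATABASE [WideWorldImporters]
SET ACCELERATED_DATABASE_RECOVERY = OFF;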

The primary benefits of Accelerated Database Recovery (ADR) are:

Fast and consistent database recovery 

With ADR, long running transactions do not impact the overall recovery time, enabling fast and consistent database recovery irrespective of the number of active transactions in the system or their sizes. 

Instantaneous transaction rollback 

With ADR, transaction rollback is instantaneous, irrespective of the time that the transaction has been active or the number of updates that it has performed.

Aggressive log truncation 

With ADR, the transaction log is aggressively truncated, even in the presence of active long running transactions, which prevents it from growing out of control. 

ADR completely redesigns the database engine recovery process to:

  • Make it constant time and instant by avoiding having to scan the log from and to the beginning of the oldest active transaction. With ADR, the transaction log is only processed from the last successful checkpoint (or oldest dirty page log sequence number (LSN)). As a result, recovery time is not impacted by long running transactions.
  • Minimize the required transaction log space since there is no longer a need to process the log for the whole transaction. As a result, the transaction log can be truncated aggressively as checkpoints and backups occur.

At a high level, ADR achieves fast database recovery by versioning all physical database modifications and only undoing logical operations, which are limited and can be undone almost instantly. Any transactions that were active at the time of a crash are marked as aborted and, therefore, any versions generated by these transactions can be ignored by concurrent user queries.

Note: For more details about ADR, please see the Accelerated Database Recovery documentation, and watch this video for a high-level overview of ADR and its components.

SQL Server 2022

Learn about the new features on security, platform, management, and more.

New ADR improvements in SQL Server 2022

Multi-threaded version cleanup

In SQL Server 2019 (15.x), the ADR cleanup process is single-threaded within a SQL Server instance. Beginning with SQL Server 2022 (16.x), this process uses multi-threaded version cleanup (MTVC), which allows multiple databases under the same SQL Server instance to be cleaned in parallel.

MTVC is enabled by default in SQL Server 2022 and uses one thread per SQL instance. To adjust the number of threads for version cleanup, set ADR Cleaner Thread Count with sp_configure.

USE master;
GO
-- Enable advanced options so that ADR Cleaner Thread Count is visible
EXEC sp_configure 'show advanced options', '1';
RECONFIGURE;
-- List all advanced options, including ADR Cleaner Thread Count
EXEC sp_configure;
-- The following example sets the ADR Cleaner Thread Count to 4
EXEC sp_configure 'ADR Cleaner Thread Count', '4';
RECONFIGURE WITH OVERRIDE;
-- Verify the number of threads allocated to the ADR version cleaner
EXEC sp_configure 'ADR Cleaner Thread Count';

In the above example, if you configure the ADR Cleaner Thread Count to be four on a SQL Server instance with two databases, the ADR cleaner will allocate only one thread per database, leaving the remaining two threads idle.

Note: The maximum number of ADR cleaner threads is capped at the number of cores used by the SQL Server instance. For example, if you are running SQL Server on an eight-core machine, the maximum number of ADR cleaner threads that the engine can use will be eight, even if the value set with sp_configure is greater.

User transaction cleanup

This improvement allows user transactions to run cleanup on pages that could not be addressed by the regular cleanup process due to lock conflicts. This helps ensure that the ADR cleanup process works more efficiently.

Reducing memory footprint for PVS page tracker

This improvement tracks persisted version store (PVS) pages at the extent level, in order to reduce the memory footprint needed to maintain versioned pages.

Accelerated Database Recovery cleaner improvements

The ADR cleaner has improved version cleanup efficiency, changing how SQL Server tracks and records aborted versions of a page and leading to improvements in memory usage and capacity.

Transaction-level persisted version store

This improvement allows ADR to clean up versions belonging to committed transactions independent of whether there are aborted transactions in the system. With this improvement PVS pages can be deallocated, even if the cleanup cannot complete a successful sweep to trim the aborted transaction map.

The result of this improvement is reduced PVS growth even if ADR cleanup is slow or fails.

New extended event

A new extended event, tx_mtvc2_sweep_stats, has been added for telemetry on the ADR PVS multi-threaded version cleaner.
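
If you want to collect that telemetry, a minimal Extended Events session sketch follows; the session name and file target are illustrative, and the event is assumed to live in the sqlserver package:

-- Capture the ADR multi-threaded version cleaner sweep statistics.
CREATE EVENT SESSION [adr_pvs_sweep] ON SERVER
ADD EVENT sqlserver.tx_mtvc2_sweep_stats
ADD TARGET package0.event_file (SET filename = N'adr_pvs_sweep.xel');
GO
ALTER EVENT SESSION [adr_pvs_sweep] ON SERVER STATE = START;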

Summary

In this blog post, we covered all the exciting ADR improvements that we are including with SQL Server 2022 that further improve the overall availability and scalability of your databases.

Figure 1: 5M rows bulk insert and recovery times after SQL restart with ADR on and off (side-by-side comparison)

Stay tuned as we are currently working on further improvements of the multi-threaded version cleaner that will enable parallelizing version cleanup within databases.

Learn more

For more information, and to get started with SQL Server 2022, check out the following references: 

Read What’s new in SQL Server 2022 for all the new features on security, platform, management, and more. 

SQL Server 2022: Intel® QuickAssist Technology overview
http://approjects.co.za/?big=en-us/sql-server/blog/2023/03/09/sql-server-2022-intel-quickassist-technology-overview/
Thu, 09 Mar 2023 19:00:00 +0000
SQL Server backup compression provides the compressed backup option for all streaming SQL Server backup types—full, differential, and transaction log.

Part of the SQL Server 2022 blog series

In SQL Server 2008 Enterprise Edition, backup compression was introduced, adding WITH COMPRESSION as an option to the T-SQL BACKUP command. SQL Server backup compression provides the compressed backup option for all streaming SQL Server backup types—full, differential, and transaction log. Using the metadata in the backup file header, the RESTORE command decompresses compressed backup files. Compressing backup data benefits customers in two ways: first, the backup files consume less storage, and second, since there are fewer bytes being transferred, the backups run faster than uncompressed backups.
 
The default configuration in SQL Server is to not compress backups, but many users change this default globally using sp_configure.

While customers use the current compression standard, the challenge many customers have with the current compression implementation is that the compression ratios are frequently too low to justify the additional compute overhead required to compress the backup files.
 
Additionally, customers cannot offload the compute overhead of the compression task. Many customers in high-volume online transaction processing (OLTP) environments put their business continuity and disaster recovery (BCDR) strategy at risk by delaying their regular backup schedules in order to remain operational for customer workloads.

Intel® QuickAssist Technology (Intel® QAT) in SQL Server 2022 solves these challenges by providing hardware acceleration and CPU offload capabilities for enhanced compression and decompression functions. This feature is designed to give tangible improvements both in terms of reduced CPU usage and backup completion times, as well as a reduction in storage consumption.

The interface for Intel® QAT devices uses a software library called QATzip where the QATzip API maintains a built-in software fallback mechanism. This fallback mechanism allows administrators to hot-add/remove devices, better resist hardware failure scenarios, and provide software-based compression where needed.

The SQL Server 2022 feature provides integrated compression/decompression capabilities by using the QATzip library and the associated Intel® Intelligent Storage Acceleration Library (ISA-L). Intel® ISA-L is installed alongside QATzip where the presence and enablement of both the QATzip and Intel® ISA-L assemblies is a prerequisite for enabling instance-level server configuration for Intel® QAT hardware acceleration.

QATzip currently supports offloading backups to hardware devices, either through a peripheral component interconnect express (PCIe) adapter or as part of the host server chipset, and also supplies a software-based compression capability that can be used if there isn’t hardware available.

The SQL Server 2022 server-scope T-SQL syntax extends the current BACKUP DATABASE command to make it easy for database administrators to choose the desired algorithm to compress backups.

Additionally, the sp_configure configuration option is available to adjust backup compression defaults while introducing a new dynamic management view in order to query the configuration state.

The implementation of this feature surfaces appropriate error log and informational messaging to troubleshoot the Intel® QAT configuration as well as being able to track the benefits of the Intel® QAT compression feature over time by surfacing the compression algorithm history in the msdb database.

SQL Server 2022

Learn about the new features on security, platform, management, and more.

Business cases for backup compression with Intel® QAT

The organizations that will find the most benefit from Intel® QAT are those with high needs around maintaining customer workloads, such as mission-critical OLTP and online analytical processing (OLAP) environments, while also ensuring their data recovery strategy meets the organization’s service level agreements (SLAs). These organizations must maintain application workloads to meet customer expectations while ensuring their data recovery strategy is never put at risk.

Often these organizations are highly virtualized and are running SQL Server instances at, or very close to, 100 percent processing time to maximize hardware and software licensing investments.

Reference:
Online transaction processing (OLTP) – Microsoft Docs
Online analytical processing (OLAP) – Microsoft Docs

Intel® QAT driver overview

Customers should download the Intel drivers themselves directly from the Intel QuickAssist supportability page. The supportability page will give an overview of the Intel® QAT, list driver versions per operating system, and maintain the release notes for the Intel® QAT drivers.

Note: The official Intel® QAT Driver for Windows, which at the time of writing is version 1.8.0-0010, must be installed separately from the SQL Server 2022 installation.

For the initial release of QAT-enabled backups for SQL Server 2022, the feature will be available for SQL Server 2022 on Windows Server only. As a host operating system, Windows Server 2016, Windows Server 2019, and Windows Server 2022 are supported.

When the drivers are installed and the server-scope configuration option is enabled for SQL Server, it is expected that the QATzip library will be available in the C:\Windows\System32\ path and the Intel® ISA-L will be available in the C:\Program Files\Intel\ISAL\ path.

This path is true for both hardware and software-only deployment scenarios.

Customers can install the drivers before the SQL Server 2022 installation, in preparation for a disaster recovery site or for system standardization, or after the SQL Server 2022 installation, even without QAT hardware. In either case, customers are required to maintain the driver version.

Preparing a disaster recovery server site

It’s possible to install the QATzip library on a server with no QAT hardware acceleration devices available (virtual or physical). In such cases, SQL Server will still load the QATzip.dll assembly provided the ALTER SERVER CONFIGURATION HARDWARE_OFFLOAD option is enabled for QAT.

In this scenario, users can specify the ALGORITHM for COMPRESSION as QAT_DEFLATE and rely on the software fallback provided by the Intel® ISA-L to perform the compression operation using the same format and encoding as provided by QAT devices.

Below are links to the currently supported Windows Server driver version, the Intel® QAT release notes, and the Intel® QAT landing page, which will be used as the primary location for Intel® QAT support.

Intel® QuickAssist Technology landing page
Intel® QuickAssist Technology Driver for Windows* (1.8.0-0010)
Intel® QuickAssist Technology Driver for Windows release notes 

SQL Server 2022 Edition support for Intel® QAT

SQL Server 2022 implements different levels of Intel® QAT support based on the SQL Server edition. SQL Server 2022 Enterprise edition will use hardware offloading if an Intel® QAT device is available; otherwise, it will fall back to software-based compression.

SQL Server 2022 Standard Edition will only allow Intel® QAT backups with software-only compression, even if an Intel QAT device is available.

SQL Server 2022 Express Edition will allow Intel® QAT backups to be restored but will only allow backups to be performed with the default MS_XPRESS algorithm.

Note: The Intel® QAT drivers are required to be installed in order to perform backups and restores using the Intel® QAT feature.

SQL Server 2022 configuration for Intel® QAT for backup compression

In order to back up databases with the Intel® QAT compression algorithm, it is required to have the drivers installed, and the SQL Server must then be permitted to load the drivers into the SQL Server 2022 process space.

To complete this task, a new option has been added to ALTER SERVER CONFIGURATION in SQL Server 2022 with the HARDWARE_OFFLOAD syntax to enable or disable the use of hardware acceleration for a SQL Server instance. With this configuration being applied, after a restart of the SQL Server service, the Intel® QAT hardware acceleration technology can be used to provide a new optional backup compression algorithm.
 
Note: The HARDWARE_OFFLOAD option requires a restart of the SQL Server instance to take effect.

After the feature is enabled, every time the SQL Server service starts, the SQL Server process will look for the required user-space software library that interfaces with the hardware acceleration device driver API and will load the software assemblies if they are available. The Intel® QAT user-space library is QATzip, a user-space software API that interfaces with the QAT kernel driver API. It is used primarily by applications that are looking to accelerate the compression and decompression of files using one or more QAT devices.

In the case of the Windows operating system, there is a complementary software library to QATzip, the Intel® Intelligent Storage Acceleration Library (ISA-L). This serves as a software fallback mechanism for QATzip in the case of hardware failure, as well as a software-based option when the hardware is not available.

Note: The unavailability of a QAT hardware device will not prevent SQL instances from performing backup or restore operations using the new QAT_DEFLATE algorithm.

Software compression or decompression using Intel® ISA-L still requires that the QATzip software library is installed and that HARDWARE_OFFLOAD server configuration has been enabled for QAT. This is because SQL Server loads the QATzip.dll assembly provided by QATzip and it handles the fallback to Intel® ISA-L.
 
Additionally, software-based compression will be available for SQL Server 2022 Standard Edition, allowing Standard Edition customers to take advantage of software-based acceleration, while SQL Server 2022 Enterprise Edition customers will have the extended capability of offloading backups to an external device.

Enabling Intel® QAT backup compression

The ALTER SERVER CONFIGURATION statement will be used to enable or disable access to QAT devices. Using the HARDWARE_OFFLOAD statement, as shown below, will persist a bit on the SQL Server boot page to load QAT binaries at startup.

ALTER SERVER CONFIGURATION
 SET HARDWARE_OFFLOAD = ON (ACCELERATOR = QAT)

After executing the statement, users would then restart the SQL Server service to have the QAT binaries loaded into the SQL Server process space.

Note: The default configuration will be to use the Microsoft default MS_XPRESS compression capabilities.

The ALTER SERVER CONFIGURATION statement below will disable the Intel® QAT feature for SQL Server 2022.

ALTER SERVER CONFIGURATION
 SET HARDWARE_OFFLOAD = OFF (ACCELERATOR = QAT)

Hardware acceleration configuration and availability group replicas

The HARDWARE_OFFLOAD server-scope configuration option will be compatible with Availability Groups. If the setting is enabled on a primary replica, the configuration option will be a persistent server-scope property such that upon failover from a primary to a secondary replica, an attempt will be made to load the correct assemblies on the new primary replica. The recommendation will be to enable the server-scope setting on all replicas after installing the latest Intel® QAT drivers on each replica.

Verifying Intel® QAT accelerator configuration

In order to verify the Intel® QAT accelerator configuration, a number of methods can be used including a new dynamic management view (DMV), sp_configure and the sys.configurations catalog view, and the SQL Server 2022 error log.

sys.dm_server_hardware_offload_config

The SQL Server sys.dm_server_hardware_offload_config dynamic management view can be used to verify the Intel® QAT configuration state.

The addition of this dynamic management view makes it much easier to verify which SQL Server accelerators are configured for use and which have been confirmed to be loaded. This DMV should be the primary method to verify the configuration state of the available accelerators.
 
Example:
SELECT * FROM sys.dm_server_hardware_offload_config;
GO


sp_configure and the sys.configurations catalog view

The SQL Server backup compression default behavior can be adjusted. Using sp_configure, you can change the server default configuration as well as other options: enable or disable hardware acceleration, enable backup compression as a default, and change the default compression algorithm.

The status of these options is reflected in the sys.configurations catalog view.

As mentioned previously, you can discover the hardware offload configuration by querying the sys.dm_server_hardware_offload_config dynamic management view.

Backup compression default

To discover the backup compression default state, you can use the query below: 

SELECT * FROM sys.configurations
WHERE name = 'backup compression default';


Changing this configuration is permitted through the sys.sp_configure stored procedure.

For example:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

No restart of SQL Server is required for this change to take effect. When this default option is changed to 1 (that is, backups run WITH COMPRESSION by default), MS_XPRESS continues to be the default compression algorithm, provided the backup compression algorithm setting has not been changed.

Backup compression algorithm

To discover the backup compression algorithm being used, you can use the query below:

SELECT * FROM sys.configurations
WHERE name = 'backup compression algorithm';


The 'backup compression algorithm' configuration option allows customers to change the default compression algorithm, so they can decide whether they want Intel® QAT to be the default compression algorithm for SQL Server.
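As a sketch of what changing the default might look like, the statement below sets this option with sp_configure. The numeric value-to-algorithm mapping in the comment is an assumption; confirm it against the SQL Server 2022 documentation before relying on it.

-- Sketch only: the value mapping is an assumption to verify in the documentation
-- (for example, 1 = MS_XPRESS, 2 = QAT_DEFLATE).
EXEC sp_configure 'backup compression algorithm', 2;
RECONFIGURE;
GO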

Hardware offload enabled

There will be a global sp_configure option to enable/disable hardware offloading.

Note: The value is either 0 or 1 and requires a restart of the SQL Server service to take effect.

EXEC sp_configure 'hardware offload enabled', 0;
RECONFIGURE;
GO

Creating a backup using QAT compression

The T-SQL BACKUP command WITH COMPRESSION has been extended to allow for a specified backup compression algorithm. When using Intel® QAT for backup compression acceleration, the algorithm QAT_DEFLATE will initiate an Intel® QAT compressed backup if the drivers are available and the SQL Server configuration has been completed.

Note: The standard compression algorithm is referred to as MS_XPRESS and remains the default compression option.

The ALGORITHM option is used to specify either of these two algorithms (QAT_DEFLATE or MS_XPRESS) for backup compression.

The example below will perform backup compression using Intel® QAT hardware acceleration.

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = QAT_DEFLATE);

Either of these statements will use the default MS_XPRESS compression engine.

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = MS_XPRESS);

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION;

The extension to the T-SQL backup compression syntax adds the ALGORITHM option, allowing either MS_XPRESS (the default) or QAT_DEFLATE (Intel® QAT-based compression) in SQL Server 2022.

The list below summarizes the BACKUP DATABASE WITH COMPRESSION options in SQL Server 2022.

  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}: backup without compression, or with compression, depending on the backup compression default setting.
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION: backup with compression using the XPRESS algorithm.
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION (ALGORITHM = MS_XPRESS): backup with compression using the XPRESS algorithm. (There is an argument for also permitting DEFAULT or NATIVE as options.)
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION (ALGORITHM = QAT_DEFLATE): backup with compression using the QATzip library (QZ_DEFLATE_GZIP_EXT at compression level 1).

SQL Server 2022 RESTORE process

The RESTORE command does not include a COMPRESSION option because the backup header metadata specifies whether the backup is compressed, so the storage engine can restore from the backup file(s) accordingly. The backup metadata has been extended to include the compression algorithm.

Running RESTORE HEADERONLY on a backup compressed with QAT_DEFLATE and on a backup compressed with the default MS_XPRESS algorithm returns result sets similar to those produced by the commands below:

RESTORE HEADERONLY
FROM DISK = 'C:\temp\QAT-DEFLATE.bak';
GO

RESTORE HEADERONLY
FROM DISK = 'C:\temp\MS-XPRESS.bak';
GO


The RESTORE HEADERONLY command displays information such as the backup name, backup description, backup type, compression state, device type, and other columns. In SQL Server 2022, a compression algorithm column has been added to this output.

Backups compressed with QAT_DEFLATE or MS_XPRESS, as well as uncompressed backups, all use the same RESTORE DATABASE T-SQL syntax.

In the example below, the T-SQL restore syntax is valid for restoring from the testdb.bak backup file, regardless of whether it is an uncompressed backup, or a compressed backup created using QAT_DEFLATE or MS_XPRESS.

RESTORE DATABASE testdb FROM DISK='E:\Backups\testdb.bak'
WITH RECOVERY;

SQL Server backups compressed using QAT_DEFLATE support all T-SQL RESTORE operations. The RESTORE {DATABASE|LOG} statements for restoring and recovering databases from backups support all recovery arguments, such as WITH MOVE, PARTIAL, STOPAT, KEEP_REPLICATION, KEEP_CDC, and RESTRICTED_USER.
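For example, a restore of a QAT_DEFLATE-compressed backup that relocates the data and log files might look like the sketch below; the logical file names and target paths are hypothetical placeholders.

-- Sketch: restore a QAT_DEFLATE-compressed backup and relocate its files.
-- Logical file names ('testdb', 'testdb_log') and paths are placeholders.
RESTORE DATABASE testdb
FROM DISK = 'E:\Backups\testdb.bak'
WITH MOVE 'testdb' TO 'F:\Data\testdb.mdf',
     MOVE 'testdb_log' TO 'G:\Logs\testdb_log.ldf',
     RECOVERY;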

Auxiliary RESTORE commands will also be supported for all backup compression algorithms. Auxiliary RESTORE commands include RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE VERIFYONLY, and more.

Note: If the server-scope configuration HARDWARE_OFFLOAD option is not enabled, and/or the Intel® QAT drivers have not been installed, an error will be surfaced instead of attempting to perform the restore.

To restore an Intel® QAT compressed backup, the correct assemblies must be loaded on the SQL Server instance initiating the restore operation.

Best practices

A BLOCKSIZE of 65536 (64 KB) is recommended, while options such as BUFFERCOUNT and MAXTRANSFERSIZE remain dependent on the user's environment if they are changed from the defaults.
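Putting these recommendations together, a QAT-compressed backup with an explicit 64 KB block size might look like the sketch below; the BUFFERCOUNT and MAXTRANSFERSIZE values are placeholders to be tuned for your environment.

-- Sketch: QAT-compressed backup with the recommended 64 KB block size.
-- MAXTRANSFERSIZE and BUFFERCOUNT are placeholder values; tune them for your workload.
BACKUP DATABASE testdb TO DISK = 'D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = QAT_DEFLATE),
     BLOCKSIZE = 65536,
     MAXTRANSFERSIZE = 4194304,
     BUFFERCOUNT = 64;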

Learn more

For more information, and to get started with SQL Server 2022, check out the following references:

Read What’s new in SQL Server 2022 for all the new features on security, platform, management, and more.

Read the Microsoft Integrated acceleration and offloading guidance to get started with Intel® QuickAssist technology in SQL Server 2022.

Go to the Intel® QAT portal for the latest Intel information on Intel® QAT device drivers and application support along with technical documentation and whitepapers.

The post SQL Server 2022: Intel® QuickAssist Technology overview appeared first on Microsoft SQL Server Blog.

]]>
Easily migrate SQL Server applications to the cloud with Azure http://approjects.co.za/?big=en-us/sql-server/blog/2023/02/15/easily-migrate-sql-server-applications-to-the-cloud-with-azure/ Wed, 15 Feb 2023 16:00:00 +0000 Microsoft Azure knows SQL Server best and will support your cloud migration at no cost for qualifying scenarios.

The post Easily migrate SQL Server applications to the cloud with Azure appeared first on Microsoft SQL Server Blog.

]]>
Microsoft Azure knows SQL Server best and will support your cloud migration at no cost for qualifying scenarios.

The SQL Server platform has been successfully driving transformative business results with Microsoft customers across industries for over 25 years, enabling breakthrough innovation through unique capabilities like industry-leading performance1 and security built-in.

Microsoft’s continuous investments in SQL Server enable more flexibility with options for companies looking to simply save on costs, or to really transform their current SQL Server–based applications and data estate.

These options include:

  • Azure SQL Managed Instance, a fully managed cloud database that is compatible with SQL Server back to version 2008.
  • A serverless compute version that stops, starts, and scales with your workloads.
  • Elastic pools for managing groups of databases with ease.
  • Hyperscale, the flexible, high-performance service tier that scales to meet demand—all unique to Azure.

Reasons for choosing Azure over other clouds for your SQL Server applications go beyond having access to more options to meet your business and technical needs. SQL Server running in Azure also enables higher savings than AWS with faster performance1, lower downtime2, and AI-based, automatic tuning that continuously maintains peak database performance. Azure SQL is based on the SQL Server database platform, the least vulnerable commercial database in the NIST National Vulnerability Database for over 10 years3.

With Microsoft, you can also decide where you want to run your SQL Server databases: in your own datacenter, in the cloud of your choice, or in hybrid and multicloud deployments with Azure Arc–enabled SQL Managed Instance and the recently launched SQL Server 2022.


If you’re considering moving to the cloud, besides getting the most out of a cloud-based SQL Server, you can take the first step with Azure free of charge. I’m pleased to share that the new Microsoft SQL + Apps Migration Factory initiative can support your low-friction migrations of SQL Server and Windows Server–based applications at no cost.

The Microsoft SQL + Apps Migration Factory is a delivery model offered through the Unified/Customer Success resources that gathers skilled professionals, comprehensive processes, and tools to help you plan, execute, and migrate qualifying SQL Server workloads and associated application virtual machines (VMs) to Azure with near-zero code changes. In other words, it blends the technical components of cloud migration with the business and human components—so critical to helping you achieve your goals.

With this offer, you can get specialized assistance to help choose the migration approach that suits your business, such as moving a single workload or having a migration in phases, as well as guided savings and application assessments. All with the main goal of having a seamless and efficient workload migration without business disruption. Once your environment is migrated, our team can also help ensure the SQL Server workload is properly secured and performs adequately.

Hundreds of customers have already signed up to leverage this Customer Success Factory delivery model to accelerate migrations of known low-friction scenarios that do not require refactoring. Typical use cases favored by customers have included out-of-support SQL Server and Windows Server instances, dev-test environments, disaster recovery (DR) environments, and applications with a compelling business need to migrate to the cloud in the near term that do not require refactoring or rewriting. This approach also minimizes the risk of business disruption: the migration journey is jumpstarted with these low-friction scenarios while your digital transformation priorities advance in a phased manner. We like to think of this approach as eating your cake one slice at a time, and we can support slices as small or as big as you need. You can even come back for seconds and thirds at any time to advance your migration and modernization journey in an iterative and controlled manner.

So, if your SQL Server estate, including associated Windows Server–based apps, has these or other scenarios that are ready to migrate in the near term without refactoring to optimized Azure SQL and Azure VM destinations, talk to your Microsoft account team about the Factory delivery model or apply now to get started. 

Learn more


1Microsoft Azure SQL Managed Instance benchmark vs. AWS RDS for SQL Server using Azure Hybrid Benefit pricing, May 2022

2Service Level Agreements Summary | Microsoft Azure

3National Vulnerability Database (NVD), National Institute of Standards and Technology

4Subject to the limitations described in the full SQL + Apps Migration Factory program specifications here, and provided that the SQL Server workloads are low complexity with no code changes, Microsoft agrees to assess and migrate SQL Server databases and SQL Server-associated applications from your datacenter or AWS EC2 to Azure at no cost to customer. Migrations must be completed by June 30, 2023. 

The post Easily migrate SQL Server applications to the cloud with Azure appeared first on Microsoft SQL Server Blog.

]]>
Cardinality Estimation Feedback in SQL Server 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/12/01/cardinality-estimation-feedback-in-sql-server-2022/ Thu, 01 Dec 2022 16:00:00 +0000 In SQL Server 2022, we have introduced a method of CE Feedback based on actual performance of the query over time. 

The post Cardinality Estimation Feedback in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Part of the SQL Server 2022 blog series.

Cardinality estimation (CE) is a process used by the query optimizer to estimate the number of rows returned by a query or sub-part of a query. These estimates rely on two underlying things: statistics about the data and a set of assumptions—known as the model. The model determines how we interpret statistics and data coming from various sub expressions, and so, for any given set of statistics, different models will arrive at different cardinality estimates. 


Until SQL Server 2022, CE could only have one model. The set of assumptions used was baked into the code of the server, and so whatever model was picked is what was used for all queries. However, we know that not all queries and workloads perform best under one single model. For some queries, the model we use works out well, but for others, a different model would perform better. With CE Feedback in SQL Server 2022, we can now tailor the model used to generate a query plan to the specific query.

CE has always had three basic assumptions that comprise the model: independence (or partial independence), uniformity, and containment. These three assumptions determine how we interpret histograms, and they determine how we combine data during joins or in the presence of multiple predicates. In this blog, I will explain these model variants and what they mean in more detail.

Uniformity

Let's begin by discussing the uniformity assumption. This assumption is used when interpreting data from the on-disk histograms, the abstracted data about the columns being queried. We assume that all data within steps, or buckets, of a histogram is uniformly distributed at an average frequency for that bucket. Thus, when we query data, this allows us to determine the number of rows that satisfy the predicate.

Now, CE Feedback modifies the uniformity assumption only in one special case: that of row goal queries. These queries look like TOP n, FAST n, or IN, and there is a special optimization for row goal queries that relies on the uniformity assumption. Whenever we believe that a particular value occurs at a high enough frequency (based on our interpretation of the histogram using the uniformity assumption), we choose to do a quick scan of a few pages, assuming that we will get enough qualifying rows very quickly. However, if the data is skewed, we may have falsely assumed more qualifying values than were actually present. This means we scan far more pages than expected to get the requisite number of rows.
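A row goal query of this kind might look like the sketch below; the table and predicate are purely illustrative.

-- Hypothetical row goal query: TOP introduces a row goal, so the optimizer may
-- plan a quick scan on the assumption that qualifying rows are spread evenly
-- through the table. If Region = 'West' rows are skewed, far more pages than
-- expected may be read.
SELECT TOP (100) *
FROM dbo.Sales
WHERE Region = N'West';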

CE Feedback can detect such scenarios and turn off the special row goal optimization. If it turns out that the query is indeed faster without this assumption, we keep this change by persisting it in the query store in the form of a query store hint, and the new optimization will be used for future executions of the query.
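If you want to see what has been persisted, the hints written by CE Feedback surface in the Query Store catalog views. The query below is a minimal sketch with a deliberately small column list; check the sys.query_store_query_hints documentation for the full schema.

-- Sketch: list hints persisted in Query Store for the current database,
-- including any written by CE Feedback.
SELECT query_id, query_hint_text
FROM sys.query_store_query_hints;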

Charts: non-uniform data distribution compared with uniform data distribution.

Independence

Consider a WHERE clause with two predicates combined with an AND, something like City = 'Seattle' AND State = 'WA'. Under the model assumption of independence, we would take the selectivity of the individual predicates (City = 'Seattle', State = 'WA') and multiply those probabilities together. Under the model assumption of correlation, we would take the most selective predicate (City = 'Seattle') and use the selectivity of that predicate only to determine the selectivity of the conjunctive clause. There is a third model of partial correlation, in which we multiply the selectivity of the most selective predicate with a weakened selectivity (raised to a power less than 1, to make the selectivity closer to 1) of the successive predicates.

Figure 1: Independence. P1 and P2 are independent; that is, the truth (or falsehood) of P1 tells us nothing about the truth or falsehood of P2.
Figure 2: Partial correlation. In cases where we are evaluating P1 = T and P2 = T, P1 being true gives a higher likelihood of P2 being true. The cases where P1 is false are greyed out because they do not satisfy P1 = T.
Figure 3: Complete correlation. When evaluating P1 = T and P2 = T, anytime P1 is true, P2 is also true (P1 implies P2).

CE always starts out with this last model of partial independence (referenced in other places as exponential backoff), but with CE Feedback, we can see if our estimates are too high, meaning the predicates are more independent, or too low, meaning that there is more correlation than expected, and adjust the model used for that query and those predicates accordingly for future executions. If this makes the plan or performance better, we persist this adjustment using a query store hint, and use it for future executions.
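To make the three models concrete, the sketch below works through the arithmetic with made-up selectivities; the numbers are illustrative only and are not derived from real statistics.

-- Illustrative selectivities only (not from real statistics).
DECLARE @p1 float = 0.1;   -- most selective predicate, e.g. City = 'Seattle'
DECLARE @p2 float = 0.2;   -- second predicate, e.g. State = 'WA'

SELECT
    @p1 * @p2        AS full_independence,    -- 0.02
    @p1              AS full_correlation,     -- 0.10
    @p1 * SQRT(@p2)  AS partial_correlation;  -- ~0.045 (exponential backoff)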

Containment

The model assumption of containment means that users query data that is actually in the table. Meaning, if there is a column = constant predicate in the query, we assume that the constant actually exists in the table, at the frequency appropriate for where it falls within the histogram. We also assume that there is a containment relationship between joins: basically, users wouldn't join two tables together if they didn't think there would be matches. However, there are two ways of looking at the containment relationship between joins: base containment and simple containment.

Base containment assumes that there is an inherent relationship between the tables participating in a join but doesn’t make assumptions about the filters occurring on top of those tables before the join occurs. A good example might be a table of store sales and a table of store returns. We would assume that all things returned were also sold, but we would not assume that any filtering on the sales or returns tables makes containment more or less likely—we just assume containment at the base table level and scale the size of the estimated result up or down based on the filters in play.

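A join of that shape might look like the sketch below; the table and column names are hypothetical.

-- Hypothetical schema: under base containment, every return is assumed to match
-- a sale, and the date filter only scales the estimate up or down.
SELECT s.SaleID, r.ReturnDate
FROM dbo.StoreSales AS s
JOIN dbo.StoreReturns AS r
    ON r.SaleID = s.SaleID
WHERE s.SaleDate >= '20220101';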

Simple containment is a bit different—instead of assuming some inherent relationship between the base tables, it assumes that the filters applied to those tables create a containment relationship. For example, querying for graduating seniors from a specific high school and joining that with a query for athletes in a given zip code. While there is some inherent relationship between the two tables a priori, the filters applied specifically limit and create a containment relationship. 


CE starts with the base containment model for all queries. However, if the estimates for the join are ‘off’ in some way—the incoming estimates are good, the outgoing estimates are bad—we try the alternate containment model. When the query is executed again, we try out the other model, and if it is better, we persist it with a query store hint and use it for future executions.

Conclusion

In summary, CE requires a basic set of assumptions that are used to interpret and combine statistical data about histograms or sub-parts of a query. Those assumptions work well for some queries, and less well for others. In SQL Server 2022, we have introduced a method of CE Feedback which adjusts those assumptions in a per-query way, based on actual performance of the query over time. 
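If you need to rule CE Feedback in or out while troubleshooting a plan regression, a database scoped configuration is available. The sketch below assumes SQL Server 2022 with database compatibility level 160 and Query Store enabled, per the documented defaults.

-- Sketch: disable CE Feedback for the current database while troubleshooting,
-- then re-enable it once the investigation is complete.
ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = OFF;

ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = ON;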

Learn more

The post Cardinality Estimation Feedback in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Manage, govern, and secure all your SQL Servers with new hybrid capabilities enabled by Azure Arc  http://approjects.co.za/?big=en-us/sql-server/blog/2022/11/17/manage-govern-and-secure-all-your-sql-servers-with-new-hybrid-capabilities-enabled-by-azure-arc/ Thu, 17 Nov 2022 16:00:00 +0000 The newest edition of SQL Server 2022 delivers continued innovation with hybrid and multicloud capabilities, made possible through Azure Arc.

The post Manage, govern, and secure all your SQL Servers with new hybrid capabilities enabled by Azure Arc  appeared first on Microsoft SQL Server Blog.

]]>
Part of the SQL Server 2022 blog series.

This blog has been updated to reflect the new billing options now available for versions of SQL Server 2014 and above.

In the face of changing market conditions and pressure to accelerate growth, the ability for customers to do more with less is critical. And while the cloud has long been the north star for realizing efficiencies and accelerating innovation, at Microsoft, we understand that these benefits also need to happen outside of the cloud. In 2021 we announced the general availability of Azure Arc-enabled SQL Managed Instance. Previously only available in Azure, Azure Arc-enabled SQL Managed Instance allows customers to build new cloud native applications on any infrastructure, in their on-premises environments and across other public clouds. Now, we are offering a way for customers to make the most of their legacy applications, with Azure Arc-enabled SQL Server.

Yesterday, we announced the general availability of SQL Server 2022, the latest milestone in the 30 year history of SQL Server. This newest edition of SQL Server delivers continued innovation with new hybrid and multicloud capabilities that allow customers to manage and protect their SQL environments in more ways than ever before, no migration required. Together with these innovations, we are also introducing a consumption-based cloud billing model to purchase SQL Server enabled by Azure Arc.

Bring Azure Innovation to SQL Servers Anywhere

Azure Arc is a bridge that extends the Azure platform to help customers build applications and services with the flexibility to run on-premises, at the edge, and in multicloud environments. With Azure Arc, customers can manage their entire SQL estate through a single pane of glass and gain multilayered security and improved data governance through a single integrated solution.

When customers enable Azure Arc on SQL servers, they can unlock more robust functionalities:

Manage

Manage their entire SQL estate in the Azure portal and get a single, unified view to better manage inventory and licenses. Leverage SQL Best Practice Analyzer to automatically evaluate SQL Server environments for performance, scalability, and business continuity, all at no additional cost.

Govern

Microsoft Purview is a family of data governance, risk, and compliance solutions that can help organizations govern, protect, and manage their environments. With Azure Arc, SQL Server customers will have Purview access policies readily available for those on-premises environments to centralize insights and governance across their entire data estate.

Secure

We also enabled Microsoft Defender for Cloud for SQL Server. Microsoft Defender enhances the security posture of a customer’s SQL Server environments, no matter where they reside. Microsoft Defender continuously scans and proactively identifies any vulnerabilities or attacks that are happening within SQL Server environments. This multi-layered security is made possible through Azure Arc, so data is protected both at the powerful database engine level and by Azure security capabilities from the cloud. 

SQL Server 2022 customers will also be able to benefit from single sign-on with Azure Active Directory to secure identities. These are security capabilities that customers have asked for in their on-premises environments for a long time, and we are excited to bring them to our SQL Server customers.

Cloud billing model for all SQL workloads*

We are excited to provide customers with a new consumption-based, cloud billing model that provides flexibility to innovate quickly and save on costs. With Pay-as-you-go billing on SQL Server, customers can benefit from:

  • Flexibility to choose between consumption-based licensing or the perpetual SQL Server licenses.
  • Improved cost efficiencies for both Standard Edition and Enterprise Edition. Eliminate the need to pay full upfront licenses and pay for only what is used, by the hour, which is ideal for spike workloads or ad-hoc usage.
  • Support for multicloud and hybrid deployments, with consistent billing for a customer's entire SQL environment, no matter where it resides.

The PAYG model is enabled by Azure Arc, so connection of the SQL Server to Azure via Azure Arc is required. For SQL Server 2022 customers, the Azure Arc connection is a default part of the SQL Server 2022 setup process and the PAYG billing is available. For customers with prior versions of SQL Server (2014 editions and above), this billing model is available in the Azure Portal once an Azure Arc connection is established. Pricing for the new pay-as-you-go options can be found here.

Innovate anywhere with Azure Arc today​

Azure Arc was built to bring Azure innovation anywhere. With this week’s announcements, our SQL Server customers can unlock cloud-native features on-premises, while benefiting from a more secure and streamlined management experience for their SQL Server environments, no matter where they reside. For SQL Server 2022 customers, these benefits will come already integrated with SQL Server. But customers with SQL Server 2014 and above can unlock many of the same innovations with the help of Azure Arc.

With Azure Arc, SQL Server customers will be able to maximize the value of their investments with technology that allows them to do more with less.

Learn more

Learn about the general availability for SQL Server 2022: general availability for SQL Server 2022.

The post Manage, govern, and secure all your SQL Servers with new hybrid capabilities enabled by Azure Arc  appeared first on Microsoft SQL Server Blog.

]]>
Azure Data Studio November release http://approjects.co.za/?big=en-us/sql-server/blog/2022/11/16/azure-data-studio-november-release/ Wed, 16 Nov 2022 18:30:00 +0000 Table Designer and Query Plan Viewer are now generally available along with previews of Oracle database migration to Azure for PostgreSQL and Azure SQL.

The post Azure Data Studio November release appeared first on Microsoft SQL Server Blog.

]]>
In this release of Azure Data Studio, we have exciting news to share across several of our core features and extensions. The first is the announcement of the general availability of Table Designer and Query Plan Viewer. We would like to extend a huge thank you to our engineering teams who have worked tirelessly over the past few months on improvements to these features. We would also like to thank the MVPs and community members who provided feedback on these features. We are grateful for continued engagement from users as we work to make Azure Data Studio the tool of choice for cloud database management across multiple platforms.

In addition to these two features now being generally available, we are pleased to announce enhancements in the assessment tooling for Oracle database migration to Azure Database for PostgreSQL and Azure SQL, both in preview. The MySQL extension for Azure Data Studio is also available in preview, as is Azure SQL Database Offline Migration support in the Azure SQL Migration extension. We would also like to introduce arm64 macOS support, for which many of you have been patiently waiting (more details below).

As you may already know, SQL Server 2022 is generally available today, and we have introduced support for this latest version of SQL Server via our deployment wizard. We have made improvements to the connection experience, including a change to the Encrypt property, which now defaults to True. Finally, we completed another set of Visual Studio (VS) Code merges that included numerous bug fixes and UI improvements, some of which are highlighted below.

arm64 macOS support in Azure Data Studio

Over one year ago, we received a request to add arm64 support on macOS to Azure Data Studio. We are pleased to announce that this functionality is now available in this release. Folks using arm64 macOS will notice that the Azure Data Studio builds for Apple Silicon or Universal load and run significantly faster, as emulation is no longer needed. A few extensions still need to be modified to offer the same support, and we are working with the appropriate teams to get those changes in place. Interested in seeing support for arm64 on Windows? Share your comments and upvote here.

Change in default value for Encrypt Property

Version 1.40 of Azure Data Studio includes an important change to the Encrypt property for the MSSQL provider connections, which is now enabled by default (set to True). 

In Azure Data Studio 1.39 and below, the Encrypt connection property was on the Advanced page and defaulted to False. As shown below, both the Encrypt and Trust server certificate properties have been moved to the main Connection Details pane for Microsoft SQL Server connections, with information icons that provide more detail on hover.

Screenshot of new Connection Details pane.

The Encrypt property continues to support two options:

  1. True (now the default value)
  2. False

Upon upgrading to Azure Data Studio 1.40, users should review the options selected for Encrypt and Trust server certificate before connecting. More information about this change can be found here.

Introducing assessment tooling for Oracle database migrations to Azure Database for PostgreSQL and Azure SQL (preview)

Enabling the migration of Oracle workloads to Azure PostgreSQL and Azure SQL through a unified assessment tool has been a key ask from customers and we are excited to announce the preview release of this experience in Azure Data Studio via the Database Migration Assessment for Oracle extension (check out this demo to see this in action). With these changes, migration planning is simplified for customers looking to modernize their data estate to Azure managed databases. This new assessment scenario helps customers speed up migrations while reducing risk, making it easier than ever to move Oracle databases to Azure. Read more in this blog announcement.

MySQL extension is now available in Azure Data Studio (preview)

As you may have heard at Microsoft Ignite, the MySQL extension for Azure Data Studio is now available in preview, bringing more flexibility to database management. With the MySQL extension for Azure Data Studio, you can now connect to and modify MySQL databases, taking advantage of the modern editor experience and capabilities in Azure Data Studio. You can learn more here.

Announcing Azure SQL Database Offline Migrations in the Azure SQL Migration Extension (preview)

This new migration capability in the Azure SQL Migration extension provides an end-to-end experience to modernize from SQL Server to Azure SQL Database. This extension allows you to perform a migration readiness check with actions to remediate possible migration blockers, export the assessment results, and get right-sized Azure recommendations. These recommendations include an all-new elastic recommendation model to meet your database performance needs. Thanks to the Azure SQL Migration extension, you can perform offline migrations of your SQL Server databases running on-premises, SQL Server on Azure Virtual Machines, or any virtual machine running in the cloud (private, public) to Azure SQL Database. Learn more from this blog announcement. For a hands-on experience using this extension, please refer to the Migrate SQL Server to an Azure SQL Database offline using Azure Data Studio tutorial.

Query History Extension is now generally available

In the August release blog, we noted the benefits of the Query History extension which includes the ability to view previous queries executed and double-click on any query in the history to open it in a new window for viewing or execution. The team has addressed a few accessibility issues, added the ability to limit the number of entries stored, and it is now generally available through the extensions pane in Azure Data Studio. For those of you who write queries and tune code, the history provided from this extension can be a time-saver.

Visual Studio Code Merge—from 1.62 to 1.67

The UI framework of Azure Data Studio is forked from VS Code, and the codebase needs to be kept up to date with updates to its parent framework via merges, completed periodically by our engineers. This Azure Data Studio release includes updates that bring ADS to version 1.67, from its previous version, 1.62. While there are many updates in this release that users will appreciate, we have highlighted a few of our favorites below.

New Side Panel and Configure Layout settings 

Azure Data Studio now offers more flexibility to customize the layout in the user interface via the addition of Side Panels. With Side Panels, you can now house views from the Side Bar or the bottom Panel. Unlike moving the bottom Panel to the left or the right of the editor, the new Side Panel works in addition to the bottom Panel so you can see more sets of views at once. In the illustration below, the terminal is being dragged to the right and dropped to create the Side Panel. The user can then switch views in the Side Bar (by opening the global Search view), while keeping the terminal view visible.


To better configure layouts, we added a Customize Layout button in the title bar. This button provides a new interactive quick pick experience, allowing you to control all aspects of layout configuration in one place.

A GIF of the Azure Data Studio workspace showing different layout configurations.

To learn more about these layout changes, check out the Visual Studio Code January 2022 release notes.

Local history

The local history of files is now available in the Timeline view. Every time you save a file in the query editor, a new entry is added to the list. Each local history entry contains the full contents of the file at the time the entry was created and, in certain cases, can provide more semantic information, for example, indicating a refactoring. In the illustration below, a CREATE TABLE T-SQL script is edited to add an additional column. Upon saving, this new entry is saved to the Local History tab and then compared to its previous version. You can also restore the content to previous versions, as well as delete or rename the entry. To learn more about this update, please check out this Visual Studio Code March 2022 release note.

A GIF showing how to add changed files to the local history via the Timeline view.

Modified menu settings

The Settings editor search control now contains a funnel button on the right side. Clicking on the button shows a list of filters that you can apply to the search query to filter down the results.

Learn more

If you haven’t already installed this release of Azure Data Studio, please see how to on our download page.

The team is focused on improving Azure Data Studio from both a feature and stability perspective, and we hope these improvements make your daily use of ADS even better. 

We would love to hear your feedback on this release—you can find us on Twitter or log an issue on GitHub.

The post Azure Data Studio November release appeared first on Microsoft SQL Server Blog.

]]>