David Pless, Author at Microsoft SQL Server Blog
http://approjects.co.za/?big=en-us/sql-server/blog
Official News from Microsoft’s Information Platform

Provision Premium SSD v2 Storage for Microsoft SQL Server on Azure Virtual Machines in the Microsoft Azure portal
http://approjects.co.za/?big=en-us/sql-server/blog/2024/04/01/provision-premium-ssd-v2-storage-for-microsoft-sql-server-on-azure-virtual-machines-in-the-microsoft-azure-portal/
Mon, 01 Apr 2024 15:00:00 +0000

We’re excited to announce the public preview of the Premium SSD v2 provisioning experience for SQL Server on Azure Virtual Machines.

Guidance on choosing SQL Server storage options in Azure

We’re excited to announce the public preview of the Premium SSD v2 provisioning experience for SQL Server on Azure Virtual Machines (VMs) deployed in the Azure portal. Premium SSD v2 storage improves the performance, reliability, and scalability of your SQL Server workloads while offering robust resource capacity: you can create a single disk with up to 64 TiB of capacity, 80,000 input/output operations per second (IOPS), and 1,200 MB/s of throughput.

When building Azure SQL VMs in the cloud, DBAs have several storage choices to give their applications the performance and capacity their workloads require. In Azure, DBAs have compute options along with affordable storage designed to handle mission-critical SQL Server workloads. In this article, we’ll review these options, focusing on the latest capabilities of Premium SSD v2 storage and the Ebds_v5 Azure VMs, which are better together—providing the best combination of price-performance capabilities in the cloud for SQL Server workloads.


Premium SSD v2 storage in Azure portal

Deploy SQL Server on Azure VMs with Premium SSD v2 disks using the Azure portal.

Managed disk storage options

DBAs looking to move their mission-critical SQL Server applications from on-premises to the cloud have the managed disk options of Premium SSD (Gen 1), Premium SSD v2, and Ultra Disk for their production workloads, while Standard SSDs are often used for dev/test and smaller departmental workloads. The goal of this article is to review the Azure VM storage options for SQL Server and give DBAs the tools and information needed to weigh the possible trade-offs in Azure feature availability and overall cost.

Premium SSD (Gen 1)

Most DBAs looking to build scalable storage with optimal price-performance use Premium SSD (Gen 1) managed disks for their storage solutions. Using first-generation Premium SSD disks, DBAs provision their VMs by striping disks, usually P30 or P40, in a storage pool. A storage pool allows DBAs to combine disks up to the VM’s limits, providing scalability and maximizing price-performance, while Premium SSD (Gen 1) disks allow DBAs to take advantage of Azure reservations and read-only host caching.

While Premium SSD (Gen 1) storage is remarkably flexible, operates with minimal Azure feature limitations in the cloud, and offers price-performance options through reservations for select disk types, it cannot scale to meet the demands of the latest Ebdsv5 VM series.

Additional differences between Premium SSD and Premium SSD v2 include:

  • Premium SSD v2 offers higher disk capacity, up to 64 TiB per disk, compared to 32 TiB for Premium SSD.
  • Premium SSD v2 provides more flexibility and control over disk performance, as you can independently adjust the disk size, IOPS, and throughput according to your workload needs, while Premium SSD (Gen 1) has fixed performance tiers based on disk size.
  • Premium SSD v2 has lower latency and higher reliability than Premium SSD as it uses newer hardware and an improved storage platform.
  • Premium SSD v2 supports higher levels of bursting compared to the previous generation, which allows you to achieve additional performance when needed for short periods of time without additional cost.

Ultra Disk

DBAs can also use Ultra Disk to meet the storage demands of the latest generation of Azure VMs, as it offers sub-millisecond latency and better performance than Premium SSD (Gen 1). Like Premium SSD v2, Ultra Disk also allows DBAs to dynamically configure and scale the IOPS, throughput, and capacity of their disks independently without having to restart the VM or change the disk size. This makes Ultra Disk an attractive option for data-intensive workloads such as SQL Server that require consistency and high performance with low latency.

However, Ultra Disk also has drawbacks that make it less suitable for some scenarios. For example, Ultra Disk is only available in a limited number of regions and has stricter requirements for VM sizes, zones, and proximity placement groups. Ultra Disk also does not support disk snapshots, disk encryption, Microsoft Azure Site Recovery, or host caching options. Moreover, Ultra Disk is much more expensive than both Premium SSD and Premium SSD v2, especially for larger disk sizes.

Comparing VM and storage deployments

The established guidance for SQL Server VM deployments was to use Premium SSD (Gen 1) in a storage pool configuration with read-only caching for the data files. For the transaction log, we advised using Ultra Disk in cases where DBAs needed lower latency and could handle the limitations. This guidance especially applied to our previous hero VM series, such as the Edsv4-series, which offered the best performance for OLTP workloads at the time.

However, VMs continued to improve, and with Azure Boost and other hardware enhancements, the newest Ebs_v5 and Ebds_v5 VMs have proven to be the optimal VM series for SQL Server workloads. The newest Ebs_v5 and Ebds_v5 VMs power higher levels of IOPS and throughput, and with NVMe storage interface support they can scale well beyond the capabilities of Premium SSD (Gen 1). The Ebs_v5 and Ebds_v5 VM series, and the larger VMs on the horizon, will require a higher level of storage performance than Premium SSD (Gen 1) was able to provide. A higher level of storage performance is needed to match the capabilities of our newest generation of Azure SQL VMs and to avoid being throttled or capped when your application is pushing higher levels of IOPS and throughput.

The next generation of Azure VMs will push well beyond the largest storage needs of our current generation. For example, the largest machine in the previous-generation Edsv4-series is the E80ids_v4, an Azure SQL VM with 80 vCores, 504 GiB of memory, 80,000 max uncached IOPS, and 1,500 MBps of max uncached disk throughput. For a machine of this size, a Premium SSD storage pool would require 16 P30 disks to provide the same number of IOPS that a single Premium SSD v2 disk can achieve, with improved latency and lower overall cost.

In comparison, the Ebds_v5 series offers the E112ibds_v5 VM size, which supports 400,000 max uncached IOPS and 10,000 MBps of max uncached disk throughput (Ultra Disk/Premium SSD v2). Premium SSD (Gen 1) would require 80 disks to match the IOPS capabilities of this VM, which would exceed the VM’s max data disk limit.

Premium SSD v2 needs only five disks in a storage pool, and it additionally allows adjusting IOPS and throughput based on need, for a better overall total cost of ownership (TCO).

Azure SQL VM best practices

Premium SSD v2 has more flexibility than Premium SSD (Gen 1) and Ultra Disk. You can choose any supported size for a Premium SSD v2 disk and change its performance parameters without interruption. Premium SSD v2 does not offer host caching, but its much lower latency addresses some of the same needs that host caching does. The ability to modify IOPS, throughput, and size on demand reduces the management overhead of having to combine disks to meet your needs.

To get started, when provisioning a new SQL Server on Azure VM in the Azure portal, you can choose Premium SSD v2 for eligible VMs:

Provisioning a new SQL Server on Azure VM

Premium SSD v2 allows you to change disk size, IOPS, and throughput independently to reach your performance targets, making workloads more cost efficient while also adapting to changing performance needs.

With a capacity of 64 TiB, 80,000 IOPS, and 1,200 MBps of throughput, most environments can benefit from the performance capabilities of a single Premium SSD v2 disk—but for our largest Azure VMs, Premium SSD v2 disks can be combined into a storage pool to provide the performance required for a single logical drive.

When deploying your SQL Server VM image in the Azure portal, Premium SSD v2 is available for the Ebds_v5 and Ebs_v5 Azure VM series which are optimized for high-performance database workloads.

Configure Storage

The following table helps visualize some of the performance gains and cost savings when using Premium SSD v2 with your Ebds_v5 and Ebs_v5 VMs:

Ebdsv5 and Premium SSDv2 together
* This cost is for pay-as-you-go compute only, assuming Azure Hybrid Benefit for both Windows OS and SQL Server licensing costs.
¹ The HammerDB TPC-C workload is derived from the TPC-C Benchmark and is not comparable to published TPC-C Benchmark results, as the HammerDB TPC-C workload results do not fully comply with the TPC-C Benchmark.

Learn more about Premium SSD v2 storage for SQL Server on Azure VMs

In summary, Premium SSD v2 offers enhanced performance, granular scalability, and cost-effectiveness for applications demanding sub-millisecond disk response times. While it provides more capabilities, the actual cost difference between Premium SSD, Premium SSD v2, and Ultra Disks depends on factors such as region, disk size, IOPS, and throughput. You can use the Azure pricing calculator to estimate costs based on your specific needs.

If you are deploying SQL Server VMs using the Azure portal and want to utilize Premium SSD v2, note that it is currently limited to the Ebds_v5 or Ebs_v5 series VMs in this public preview phase.

We’re committed to providing our customers with the best possible experience when running SQL Server on Azure VMs. The addition of Premium SSD v2 storage is another step toward that goal.

Try out Premium SSD v2 storage for SQL Server on Azure VMs today and please share your feedback with us. We look forward to hearing from you as we continue to improve our offerings for SQL Server on Azure VMs.

To get started, check out Use Premium SSDv2 storage with your SQL Server on Azure VMs.

You can also keep an eye on the What’s new page for all the latest and greatest updates to SQL Server on Azure VMs and What’s new for Azure storage.

SQL Server 2022: Intel® QuickAssist Technology overview
http://approjects.co.za/?big=en-us/sql-server/blog/2023/03/09/sql-server-2022-intel-quickassist-technology-overview/
Thu, 09 Mar 2023 19:00:00 +0000

SQL Server backup compression provides the compressed backup option for all streaming SQL Server backup types—full, differential, and transaction log.

Part of the SQL Server 2022 blog series

In SQL Server 2008 Enterprise Edition, backup compression was introduced, adding WITH COMPRESSION as an option to the T-SQL BACKUP command. SQL Server backup compression provides the compressed backup option for all streaming SQL Server backup types—full, differential, and transaction log. Using the metadata in the backup file header, the RESTORE command decompresses compressed backup files. Compressing backup data benefits customers in two ways: first, the backup files consume less storage, and second, since fewer bytes are transferred, compressed backups run faster than uncompressed backups.
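For example, a compressed full backup is taken by adding WITH COMPRESSION to the BACKUP statement (the database name and path here are illustrative):

```sql
-- Full backup with compression; RESTORE reads the backup header
-- and decompresses automatically, so it needs no extra option.
BACKUP DATABASE testdb
TO DISK = N'D:\Backups\testdb_full.bak'
WITH COMPRESSION;
```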
 
The default configuration in SQL Server is to not compress backups, but many users change this default globally using sp_configure.

While customers use the current compression standard, the challenge many have with the current implementation is that the compression ratios are frequently too low to justify the additional compute overhead required to compress the backup files.
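You can gauge whether compression is paying off by comparing the raw and compressed sizes recorded in msdb. A sketch (the ratio is only meaningful for compressed backups, where compressed_backup_size is smaller than backup_size):

```sql
-- Compression ratio of the most recent backups, newest first
SELECT TOP (10)
    database_name,
    backup_finish_date,
    backup_size,
    compressed_backup_size,
    CAST(backup_size AS float) / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
```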
 
Additionally, customers cannot offload the compute overhead of the compression task. Many customers in high-volume online transaction processing (OLTP) environments put their business continuity and disaster recovery (BCDR) strategy at risk by delaying their regular backup schedules in order to remain operational for customer workloads.

Intel® QuickAssist (Intel QAT) technology in SQL Server 2022, solves these challenges by providing hardware acceleration and CPU offload capabilities for enhanced compression and decompression functions. This feature is designed to give tangible improvements both in terms of reduced CPU usage and backup completion times, as well as a reduction in storage consumption.

The interface for Intel® QAT devices uses a software library called QATzip where the QATzip API maintains a built-in software fallback mechanism. This fallback mechanism allows administrators to hot-add/remove devices, better resist hardware failure scenarios, and provide software-based compression where needed.

The SQL Server 2022 feature provides integrated compression/decompression capabilities by using the QATzip library and the associated Intel® Intelligent Storage Acceleration Library (ISA-L). Intel® ISA-L is installed alongside QATzip where the presence and enablement of both the QATzip and Intel® ISA-L assemblies is a prerequisite for enabling instance-level server configuration for Intel® QAT hardware acceleration.

QATzip currently supports offloading backups to hardware devices, either through a peripheral component interconnect express (PCIe) adapter or as part of the host server chipset, and also supplying a software-based compression capability that can be used if there isn’t hardware available.

The SQL Server 2022 server-scope T-SQL syntax extends the current BACKUP DATABASE command to make it easy for database administrators to choose the desired algorithm to compress backups.

Additionally, the sp_configure configuration option is available to adjust backup compression defaults while introducing a new dynamic management view in order to query the configuration state.

The implementation of this feature surfaces appropriate error log and informational messaging to troubleshoot the Intel® QAT configuration as well as being able to track the benefits of the Intel® QAT compression feature over time by surfacing the compression algorithm history in the msdb database.


SQL Server 2022

Learn about the new features on security, platform, management, and more.

Business cases for backup compression with Intel® QAT

The organizations that will find the most benefit from Intel® QAT are those with high needs around maintaining customer workloads, such as mission critical OLTP and online analytical processing (OLAP) environments, while also ensuring that the organization’s data recovery strategy maintains its service level agreements (SLAs). These organizations must meet customer expectations on application workloads without ever putting the data recovery strategy at risk.

Often these organizations are highly virtualized and are running SQL Server instances at, or very close to, 100 percent processing time to maximize hardware and software licensing investments.

Reference:
Online transaction processing (OLTP) – Microsoft Docs
Online analytical processing (OLAP) – Microsoft Docs

Intel® QAT driver overview

Customers should download the Intel drivers themselves directly from the Intel QuickAssist supportability page. The supportability page will give an overview of the Intel® QAT, list driver versions per operating system, and maintain the release notes for the Intel® QAT drivers.

Note: The official Intel® QAT Driver for Windows, which at the time of writing is version 1.8.0-0010, must be installed separately from the SQL Server 2022 installation.

For the initial release of QAT-enabled backups for SQL Server 2022, the feature will be available for SQL Server 2022 on Windows Server only. Windows Server 2016, Windows Server 2019, and Windows Server 2022 are supported as host operating systems.

When the drivers are installed and the server-scope configuration option is enabled for SQL Server, the QATzip library is expected to be available in the C:\Windows\System32\ path and the Intel® ISA-L library in the C:\Program Files\Intel\ISAL\* path.

This path is true for both hardware and software-only deployment scenarios.

Customers can install the drivers before the SQL Server 2022 installation (in preparation for a disaster recovery site or for system standardization) or after the SQL Server 2022 installation, even without QAT hardware. In either case, customers are required to maintain the driver version.

Preparing a disaster recovery server site

It’s possible to install the QATzip library on a server with no QAT hardware acceleration devices available (virtual or physical). In such cases, SQL Server will still load the QATzip.dll assembly, provided the ALTER SERVER CONFIGURATION HARDWARE_OFFLOAD option is enabled for QAT.

In this scenario, users can specify the ALGORITHM for COMPRESSION as QAT_DEFLATE and rely on the software fallback provided by the Intel® ISA-L to perform the compression operation using the same format and encoding as provided by QAT devices.

Below are links to the currently supported Windows Server driver version, the Intel® QAT release notes, and the Intel® QAT landing page, which will be used as the primary location for Intel® QAT support.

Intel® QuickAssist Technology landing page
Intel® QuickAssist Technology Driver for Windows* (1.8.0-0010)
Intel® QuickAssist Technology Driver for Windows release notes 

SQL Server 2022 Edition support for Intel® QAT

SQL Server 2022 implements different levels of Intel® QAT support based on the SQL Server edition. SQL Server 2022 Enterprise Edition will use hardware offloading if an Intel® QAT device is available; otherwise, it will fall back to software-based compression.

SQL Server 2022 Standard Edition will only allow Intel® QAT backups with software-only compression, even if an Intel QAT device is available.

SQL Server 2022 Express Edition will allow Intel® QAT backups to be restored but will only allow backups to be performed with the default MS_XPRESS algorithm.

Note: The Intel® QAT drivers are required to be installed in order to perform backups and restores using the Intel® QAT feature.

SQL Server 2022 configuration for Intel® QAT for backup compression

In order to back up databases with the Intel® QAT compression algorithm, the drivers must be installed, and SQL Server must then be permitted to load them into the SQL Server 2022 process space.

To complete this task, a new option has been added to ALTER SERVER CONFIGURATION in SQL Server 2022 with the HARDWARE_OFFLOAD syntax to enable or disable the use of hardware acceleration for a SQL Server instance. With this configuration being applied, after a restart of the SQL Server service, the Intel® QAT hardware acceleration technology can be used to provide a new optional backup compression algorithm.
 
Note: The HARDWARE_OFFLOAD option requires a restart of the SQL Server instance to take effect.

After the feature is enabled, every time the SQL Server service starts, the SQL Server process will look for the required user space software library that interfaces with the hardware acceleration device driver API and will load the software assemblies if they are available. The Intel® QAT user space library is QATzip, a user space software API that interfaces with the QAT kernel driver API. It is used primarily by applications looking to accelerate the compression and decompression of files using one or more QAT devices.

In the case of the Windows operating system, there is a complementary software library to QATzip, the Intel® Intelligent Storage Acceleration Library (ISA-L). This serves as a software fallback mechanism for QATzip in the case of hardware failure, as well as a software-based option when the hardware is not available.

Note: The unavailability of a QAT hardware device will not prevent SQL instances from performing backup or restore operations using the new QAT_DEFLATE algorithm.

Software compression or decompression using Intel® ISA-L still requires that the QATzip software library is installed and that the HARDWARE_OFFLOAD server configuration has been enabled for QAT. This is because SQL Server loads the QATzip.dll assembly provided by QATzip, which handles the fallback to Intel® ISA-L.
 
Additionally, software-based compression will be available for SQL Server 2022 Standard Edition, allowing Standard Edition customers to take advantage of software-based acceleration, while SQL Server 2022 Enterprise Edition customers will have the extended capability of offloading backups to an external device.

Enabling Intel® QAT backup compression

The ALTER SERVER CONFIGURATION statement will be used to enable or disable access to QAT devices. Using the HARDWARE_OFFLOAD statement, as shown below, will persist a bit on the SQL Server boot page to load QAT binaries at startup.

ALTER SERVER CONFIGURATION
 SET HARDWARE_OFFLOAD = ON (ACCELERATOR = QAT)

After executing the statement, users would then restart the SQL Server service to have the QAT binaries loaded into the SQL Server process space.

Note: The default configuration will be to use the Microsoft default MS_XPRESS compression capabilities.

The ALTER SERVER CONFIGURATION statement below will disable the Intel® QAT feature for SQL Server 2022.

ALTER SERVER CONFIGURATION
 SET HARDWARE_OFFLOAD = OFF (ACCELERATOR = QAT)

Hardware acceleration configuration and availability group replicas

The HARDWARE_OFFLOAD server-scope configuration option will be compatible with Availability Groups. If the setting is enabled on a primary replica, the configuration option will be a persistent server-scope property such that upon failover from a primary to a secondary replica, an attempt will be made to load the correct assemblies on the new primary replica. The recommendation will be to enable the server-scope setting on all replicas after installing the latest Intel® QAT drivers on each replica.

Verifying Intel® QAT accelerator configuration

In order to verify the Intel® QAT accelerator configuration, a number of methods can be used, including a new dynamic management view (DMV), sp_configure and the sys.configurations catalog view, and the SQL Server 2022 error log.

sys.dm_server_hardware_offload_config

The SQL Server sys.dm_server_hardware_offload_config dynamic management view can be used to verify the Intel® QAT configuration state.

The addition of this dynamic management view will make it much easier to verify the SQL Server accelerators that are configured for use, and those that have been confirmed to be loaded. This dynamic management view (DMV) should be the primary method to verify the configuration state of the available accelerators.
 
Example:
SELECT * FROM sys.dm_server_hardware_offload_config;
GO


sp_configure and the sys.configurations catalog view

The SQL Server backup compression default behavior can be adjusted, along with other server default configuration options. Using sp_configure, you can enable or disable hardware acceleration, enable backup compression as a default, and change the default compression algorithm.

The status of these options is reflected in the sys.configurations table.

As mentioned previously, you can discover the hardware offload configuration by querying the sys.dm_server_hardware_offload_config dynamic management view.

Backup compression default

To discover the backup compression default state, you can use the query below: 

SELECT * FROM sys.configurations
WHERE name = 'backup compression default';


Changing this configuration is permitted through the sys.sp_configure stored procedure.

For example:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

No restart of SQL Server is required for this change to take effect. When this default option is changed to 1 (i.e., WITH COMPRESSION), MS_XPRESS will continue to be the default compression option, provided the backup compression algorithm has not been changed.

Backup compression algorithm

To discover the backup compression algorithm being used, you can use the query below:

SELECT * FROM sys.configurations
WHERE name = 'backup compression algorithm';


The “backup compression algorithm” configuration option allows customers to change the default compression algorithm, so they can determine whether they want Intel® QAT to be the default compression algorithm for SQL Server.

Hardware offload enabled

There will be a global sp_configure option to enable/disable hardware offloading.

Note: The value will be either 0 or 1 and will require a reboot to take effect.

EXEC sp_configure 'hardware offload enabled', 0;
GO

Creating a backup using QAT compression

The T-SQL BACKUP command WITH COMPRESSION has been extended to allow for a specified backup compression algorithm. When using Intel® QAT for backup compression acceleration, the algorithm QAT_DEFLATE will initiate an Intel® QAT compressed backup if the drivers are available and the SQL Server configuration has been completed.

Note: The standard compression algorithm will be referred to as MS_XPRESS and will remain the default compression option.

The ALGORITHM option will be used to specify either of these two algorithms (QAT_DEFLATE, MS_XPRESS) for backup compression.

The example below will perform backup compression using Intel® QAT hardware acceleration.

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = QAT_DEFLATE);

Either of these statements will use the default MS_XPRESS compression engine.

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = MS_XPRESS);

BACKUP DATABASE testdb TO DISK='D:\Backups\testdb.bak'
WITH COMPRESSION;

The extension to the T-SQL syntax for backup compression provides the addition of the ALGORITHM option, allowing the options MS_XPRESS (default) or QAT_DEFLATE (Intel® QAT-based compression) in SQL Server 2022.

The table below gives a summary of the BACKUP DATABASE with COMPRESSION options in SQL Server 2022.

  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}: backup with no compression or with compression, depending on the default setting.
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION: backup using the XPRESS compression algorithm.
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION (ALGORITHM = MS_XPRESS): backup with compression using the XPRESS algorithm. There is an argument for permitting DEFAULT or NATIVE as options.
  • BACKUP DATABASE <database_name> TO {DISK|TAPE|URL} WITH COMPRESSION (ALGORITHM = QAT_DEFLATE): backup with compression using the QATzip library with QZ_DEFLATE_GZIP_EXT at compression level 1.

SQL Server 2022 RESTORE process

The RESTORE command does not include a COMPRESSION option as the backup header metadata specifies if the database is compressed and, therefore, the storage engine can restore from the backup file(s) accordingly. The backup metadata will be extended to include the compression type.

Running RESTORE HEADERONLY on a backup compressed with QAT_DEFLATE and on a backup compressed with the default MS_XPRESS algorithm will return result sets similar to those produced by the commands below:

RESTORE HEADERONLY
FROM DISK = 'C:\temp\QAT-DEFLATE.bak';
GO
RESTORE HEADERONLY
FROM DISK = 'C:\temp\MS-XPRESS.bak';
GO


The RESTORE HEADERONLY command will display the compression algorithm along with other information such as the backup name, backup description, backup type, compression state, device type, and other columns. In SQL Server 2022, we will be adding the compression algorithm column.

Backups compressed with either QAT_DEFLATE or MS_XPRESS, as well as uncompressed backups, all use the same T-SQL RESTORE DATABASE syntax.

In the example below, the T-SQL restore syntax is valid for restoring from the testdb.bak backup file, regardless of whether it is an uncompressed backup, or a compressed backup created using QAT_DEFLATE or MS_XPRESS.

RESTORE DATABASE testdb FROM DISK = 'E:\Backups\testdb.bak'
WITH RECOVERY;

SQL Server backups compressed using QAT_DEFLATE will support all T-SQL RESTORE operations. The RESTORE {DATABASE|LOG} statements for restoring and recovering databases from backups will support all recovery arguments, such as WITH MOVE, PARTIAL, STOPAT, KEEP_REPLICATION, KEEP_CDC, and RESTRICTED_USER.

Auxiliary RESTORE commands will also be supported for all backup compression algorithms. Auxiliary RESTORE commands include RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE VERIFYONLY, and more.
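For instance, a QAT-compressed backup can be validated without restoring it (the file path is illustrative):

```sql
-- Verify the backup is readable and complete; this works the same
-- whether it was compressed with QAT_DEFLATE or MS_XPRESS.
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\testdb.bak';
```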

Note: If the server-scope configuration HARDWARE_OFFLOAD option is not enabled, and/or the Intel® QAT drivers have not been installed, an error will be surfaced instead of attempting to perform the restore.

To restore an Intel® QAT compressed backup, the correct assemblies must be loaded on the SQL Server instance initiating the restore operation.

Best practices

The use of a BLOCKSIZE of 65536 (64 KB) is recommended, while options such as BUFFERCOUNT and MAXTRANSFERSIZE remain dependent on the user’s environment if changes from the defaults are being made.
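As a sketch of that recommendation (the path is illustrative, and you should validate any BUFFERCOUNT or MAXTRANSFERSIZE changes in your own environment):

```sql
-- QAT-compressed backup using the recommended 64 KB block size
BACKUP DATABASE testdb
TO DISK = N'D:\Backups\testdb.bak'
WITH COMPRESSION (ALGORITHM = QAT_DEFLATE),
     BLOCKSIZE = 65536;
```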

Learn more

For more information, and to get started with SQL Server 2022, check out the following references:

Read What’s new in SQL Server 2022 for all the new features on security, platform, management, and more.

Read the Microsoft Integrated acceleration and offloading guidance to get started with Intel® QuickAssist technology in SQL Server 2022.

Go to the Intel® QAT portal for the latest Intel information on Intel® QAT device drivers and application support along with technical documentation and whitepapers.

Query Store hints in SQL Server 2022
http://approjects.co.za/?big=en-us/sql-server/blog/2022/09/08/query-store-hints-in-sql-server-2022/
Thu, 08 Sep 2022 15:00:00 +0000

Query Store is one of the most powerful database-scoped features in SQL Server for troubleshooting performance and improving the stability of your database workloads.

Part of the SQL Server 2022 blog series.

Query Store is one of the most powerful database-scoped features in SQL Server for troubleshooting performance and improving the stability of your database workloads, and we have continued to make investments in this technology since its release in SQL Server 2016.

Query Store is often described as a “flight recorder” for SQL Server, giving SQL Server the ability to store query text, query plans, and query performance history at the database scope for troubleshooting and performance analysis. Query Store also provides a method to force which plan a query will use, a capability that allows database administrators (DBAs) to reactively choose a healthier plan when a query performs poorly.

The ability to control which plans a query will use makes it much easier to respond to bad query behavior and provides database administrators a safe and comfortable method to upgrade to later versions of SQL Server, change database compatibility levels, and confidently migrate database applications to the cloud.

In the previous Query Store on by default blog, we covered how Query Store will now be enabled by default for new databases created on SQL Server 2022.  

As a result, in SQL Server 2022, we will be taking advantage of Query Store’s ability to store query-related metadata in order to bring forward several powerful new features. 

The first feature that we will discuss in this series is the ability to store and persist Query Store hints, empowering a new approach to troubleshooting and stabilizing your application’s database performance without ever having to change a single line of your application’s code.

In fact, you can even use Query Store hints without having to change your database compatibility level.

So, not only can we choose healthier plans using Query Store, but we can now specifically apply hints to queries stored in Query Store to address targeted issues or invoke a specific response.

Influencing Query plan behavior

Before we jump into Query Store hints, we should review the concept of query hints, which are currently used to influence query plan behavior. Many of us already leverage hints in our code for a number of reasons, for example, to force indexes or recompiles to mitigate unstable query plans. Query hints are specified via the OPTION clause, and while they help provide solutions to performance-related issues, they require altering the query text, meaning we have to change our application’s code.

Why change query plan behavior?  

Ideally, the Query Optimizer selects an optimal execution plan for a query. This is what it’s designed to do—but this doesn’t always happen. In these cases, a DBA may need to optimize for specific conditions otherwise we get stuck with a bad plan and a bad plan will often lead to bad performance.

We may have a stored procedure, for example, where the cardinality of a SELECT can vary wildly causing you to need to use a RECOMPILE hint. You may need to put a limit on the memory grant size for a bulk insert operation, you could need to limit the maximum degree of parallelism for a report that runs on the same system as your critical online transactional processing environment, or even use a lower database compatibility level for a particular query to prevent having to lower the compatibility level at the database because you are realizing other benefits at the higher db_compat level.

In short, there are many reasons you may need to influence plan behavior.

Code example of a query hint.
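The kind of hint discussed here might look like the following sketch (the query is a hypothetical AdventureWorks-style example):

```sql
SELECT SalesOrderID, OrderQty
FROM Sales.SalesOrderDetail
WHERE ProductID = 870
OPTION (USE HINT('DISALLOW_BATCH_MODE'), RECOMPILE);
```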

In the example shown above, we are using a query hint to disallow batch mode and also telling SQL Server to recompile the query on each execution. These targeted approaches can help solve specific issues, but again, they require changing code.

Most but not all query hints are supported as Query Store hints. The available query hints are documented in sys.sp_query_store_set_hints.

Applying Query hints today

There are many reasons to leverage query hints, but they require making changes to the application queries and this is usually something that cannot be done without application owner/vendor support. 

Database administrators may not always be able to make these changes directly to the T-SQL code. This is true for many production environments and is definitely the case for shrink-wrap vendor-based solutions.

There haven’t been many options for DBAs who are looking for a direct and safe way to influence query behavior without changing application code. Previously, they had to rely on plan guides, which were notoriously difficult to use.

What are Query Store hints?


Query Store hints provide a direct method for developers and DBAs to shape query plans without changing application code.  

Query Store hints are a new feature that extends the power of Query Store, which means Query Store hints do require the Query Store feature to be enabled and your query and query plan to be captured in the Query Store.

Just like plan guides, Query Store hints are persisted and will survive restarts, but Query Store hints are much easier to use than plan guides.

Query Store hints override other hard-coded statement-level hints and plan guides.

Query stability is important, so with Query Store hints, queries will always execute; conflicting Query Store hints are simply ignored rather than causing failures. For example, the RECOMPILE hint is not compatible with forced parameterization set at the database level, but this does not cause the query to fail.

If the database has forced parameterization set, and the RECOMPILE hint is part of the hints string set in Query Store for a query, SQL Server will ignore the RECOMPILE hint and apply the other hints in the string.


Using Query Store hints

The lifecycle for Query Store Hints follows these basic steps:

  1. First, the query must be executed.
  2. The query, plan, and execution details are then captured into the Query Store. This is dependent on the current Query Store capture policy, which can be customized using QUERY_CAPTURE_POLICY.
  3. The DBA creates a Query Store hint on a query using sp_query_store_set_hints.
  4. The query is executed using the Query Store hint.

To use Query Store hints, do the following:

  1. Identify the Query Store query_id of the query statement you wish to modify. You can do this in various ways:
    • Querying the Query Store catalog views.  
    • Using SQL Server Management Studio built-in Query Store reports.  
    • Using Azure portal “Query Performance Insight” for Azure SQL Database.
  2. Execute sp_query_store_set_hints with the query_id and query hint string you wish to apply to the query.

The query hint string can contain one or more query hints as can be seen in the example below:

EXEC sys.sp_query_store_set_hints @query_id = 39,
@query_hints = N'OPTION(RECOMPILE, MAXDOP 1,  
USE HINT(''QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_110''))'; 


Database administrators can clear hints using sp_query_store_clear_hints by passing in the @query_id.
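For instance, a previously set hint could be removed with a call along these lines (the query_id shown is hypothetical):

```sql
EXEC sys.sp_query_store_clear_hints @query_id = 39;
```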

If no Query Store hint exists for a specific query_id, a new Query Store hint will be created; if a Query Store hint already exists for that query_id, the last value provided overrides previously specified values for the associated query.

If a query_id doesn’t exist, an error will be raised as it is required that the query exists in Query Store.

If a query hint contradicts what is possible for query optimization, the hint will not block query execution and the hint will not be applied. In the cases where a hint would cause a query to fail, the hint is ignored, and the latest failure details can be viewed in sys.query_store_query_hints.

You can review the current Query Store hints in sys.query_store_query_hints and review any failures using the query below:

SELECT query_hint_id, query_id, query_hint_text,
last_query_hint_failure_reason, last_query_hint_failure_reason_desc,
query_hint_failure_count, source, source_desc
FROM sys.query_store_query_hints;
GO

When hints are applied, Execution Plan XML attributes will surface in the StmtSimple element of the Execution Plan in XML format. These XML attributes are available via the output of the Transact-SQL commands SET STATISTICS XML and SET SHOWPLAN_XML.

The QueryStoreStatementHintText will show the actual Query Store hint(s) that have been applied to the query, the QueryStoreStatementHintId will show the unique identifier of a query hint, and the QueryStoreStatementHintSource will surface the source of the Query Store hint (example: “User”).
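One way to inspect these attributes is to capture the actual plan XML for an execution, for example (the query is a hypothetical one assumed to have a hint applied):

```sql
SET STATISTICS XML ON;
-- Run a statement whose query_id has a hint applied; the returned plan XML
-- carries QueryStoreStatementHintText, QueryStoreStatementHintId, and
-- QueryStoreStatementHintSource on the StmtSimple element.
SELECT SalesOrderID, OrderQty
FROM Sales.SalesOrderDetail
WHERE ProductID = 870;
SET STATISTICS XML OFF;
```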

Next steps

Query Store “on by default” is just one of the many benefits of migrating to SQL Server 2022.

Download the latest release of SQL Server 2022 if you haven’t already done so and check out the SQL Server 2022 Overview and What’s New references. There are many new features and improved functionality being added to this release.

Learn more

For more information and to get started, check out the following reference:

Read What’s New in SQL Server 2022.

Additional useful resources:  

The post Query Store hints in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Query Store is enabled by default in SQL Server 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/08/18/query-store-is-enabled-by-default-in-sql-server-2022/ Thu, 18 Aug 2022 15:00:00 +0000 In SQL Server 2022, Query Store is now enabled by default for all newly created SQL Server databases.

The post Query Store is enabled by default in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Part of the SQL Server 2022 blog series.

Query Store is one of the most powerful database-scoped features in SQL Server for troubleshooting performance and improving the stability of your database workloads.

The Query Store feature first became available in SQL Server 2016 and provides the ability for database administrators (DBAs) to gain insights on query plan choice and the impact these choices have on SQL Server performance.

In SQL Server 2022, Query Store is now enabled by default for all newly created SQL Server databases to help customers better track performance history, troubleshoot query plan–related issues, and enable new query processor capabilities that we will introduce in the subsequent SQL Server 2022 blog series.

When enabled, Query Store asynchronously captures a history of queries, plans, and runtime statistics, and persists this data at the database scope. This means that when you migrate your databases to another version of SQL Server, migrate to different hardware, or even migrate your databases to the cloud—you can examine the performance difference down to the query plan level.

Query Store accomplishes this by separating the performance data by time windows so DBAs can identify usage patterns and understand when query plan changes happened on the server.

If an application starts having a performance issue which can occur in a number of scenarios such as high usage, post-migration, and especially when data distributions change—Query Store provides a method to force which plan a query will use. The ability to control which plans a query will use makes it much easier to respond to errant query plans and provides database administrators with the assurance they can upgrade to later versions of SQL Server, change database compatibility levels, and confidently migrate database applications to the cloud.  

For all these scenarios, Query Store gives DBAs the confidence that they can easily monitor their database application’s performance and quickly respond to performance issues when needed.

Query Store is available for SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. In Azure SQL Database, Query Store data is surfaced through Query Performance Insight, managed on the Query Performance Insight navigation blade.

Query Performance Insights in Azure SQL Database

While it is a new scenario to have Query Store enabled by default for SQL Server, Query Store has been enabled by default for Azure SQL Database (including elastic pools) and Azure SQL Managed Instance databases for some time now, and this capability has been extended to SQL Server 2022.

Query Store has been a popular feature since it was first made available in SQL Server 2016, and now in SQL Server 2022 we are leveraging Query Store’s ability to persist query plan history and health to enable new capabilities.

Query Store in SQL Server 2022 extends the ability of the query processor by enabling features such as Query Store hints, cardinality estimation (CE) feedback, degree of parallelism (DOP) feedback, memory grant feedback (MGF) persistence, and optimized plan forcing.

Query Store’s ability to asynchronously make a record of your database application’s performance data enables these new capabilities in SQL Server 2022.

While we are enabling Query Store by default for all newly created databases in SQL Server 2022, it is important to note that databases restored from other SQL Server instances, and databases upgraded in place to SQL Server 2022, retain their previous Query Store settings, whether Query Store was enabled or not.

As a post-migration step, for databases that are restored from previous SQL Server instances, it is recommended to enable Query Store and separately evaluate the database compatibility level settings as some Intelligent Query Processing features are enabled by the compatibility level setting.
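As a sketch, enabling Query Store on a restored database and setting its operation mode looks like this (the database name is hypothetical):

```sql
ALTER DATABASE [RestoredDB] SET QUERY_STORE = ON;
ALTER DATABASE [RestoredDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```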

Note: For optimal SQL Server performance and migration health, it is recommended to closely follow the recommendations provided in the post-migration validation and optimization guide.

Query Store overhead

As mentioned, Query Store has been available in Azure SQL Database and Azure SQL Managed Instance since 2016; additionally, we have improved Query Store performance in the last several releases of SQL Server with many improvements being surfaced through cumulative updates in SQL Server 2019.

Query Store improvements introduced via cumulative updates

Microsoft has improved Query Store’s handling of ad-hoc workloads through several cumulative updates in SQL Server 2019.

The most relevant cumulative updates that influence Query Store health are:

  • KB5000642—Cumulative Update 9 for SQL Server 2019: Fixes Query Store scalability improvement for ad-hoc workloads.
  • KB4577194—Cumulative Update 8 for SQL Server 2019: Query Store scalability improvement for ad-hoc workloads. Query Store now imposes internal limits to the amount of memory it can use and automatically changes the operation mode to READ_ONLY until enough memory has been returned to the Database Engine, preventing performance issues.
  • KB4563110—Cumulative Update 6 for SQL Server 2019: This improvement can force the Query Store option to be turned off by specifying the additional option FORCED in the ALTER DB command. The FORCED option allows you to turn off Query Store immediately by aborting all background tasks. ALTER DATABASE {0} SET QUERY_STORE = OFF (FORCED)

Note: It is always recommended to evaluate the latest cumulative updates for the release of SQL Server your organization is maintaining for the best performance and stability of your database application environment.

Query Store setting improvements

In addition to the stability improvements we introduced via the cumulative updates in SQL Server 2019, the Query Store default behavior has been refined to reduce the volume of captured Query Store details while increasing the amount of Query Store history that can be retained.

The setting changes introduced in SQL Server 2019 have been carried forward and improve the performance of the Query Store behavior while still optimizing Query Store’s ability both to serve as a “flight data recorder” and to enable the query processor to leverage new capabilities in SQL Server 2022.

To improve the performance of the Query Store settings, the capture mode has changed from ALL to AUTO, and the max size (MB) has changed from 100 MB to 1024 MB.

Under the AUTO capture mode setting, we now capture Query Store details when any of the following thresholds are hit:

  • 1 second = any compilation
  • 100 milliseconds = execution CPU time
  • 30 executions = execution count

These settings help further reduce any impact Query Store could have on SQL Server, while still ensuring that the critical data is captured for troubleshooting and providing the ability to enable new capabilities in SQL Server 2022.
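You can confirm the state, capture mode, and maximum size a given database is actually using by querying the sys.database_query_store_options catalog view, for example:

```sql
SELECT actual_state_desc,
       query_capture_mode_desc,
       max_storage_size_mb
FROM sys.database_query_store_options;
```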

Custom capture policies

In order to take advantage of certain SQL Server 2022 capabilities, it is necessary to have Query Store enabled.

If there is still any concern about the overhead Query Store may introduce, database administrators can leverage custom capture policies to further tune the Query Store capture behavior.

Custom capture policies are available to help further tune Query Store captures by being more selective about which queries and query details are captured. For example, an administrator may choose to capture only the most expensive queries, repeated queries, or queries that have a high level of compute overhead.

Custom capture policies can help Query Store capture the most important queries in your workload.

Please see the example ALTER DATABASE script below that would enable a custom capture policy for Query Store:

ALTER DATABASE [QueryStoreDB]
 SET QUERY_STORE = ON
 (
 OPERATION_MODE = READ_WRITE,
 CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 ),
 DATA_FLUSH_INTERVAL_SECONDS = 900,
 MAX_STORAGE_SIZE_MB = 1000,
 INTERVAL_LENGTH_MINUTES = 60,
 SIZE_BASED_CLEANUP_MODE = AUTO,
 MAX_PLANS_PER_QUERY = 200,
 WAIT_STATS_CAPTURE_MODE = ON,
 QUERY_CAPTURE_MODE = CUSTOM,
   QUERY_CAPTURE_POLICY = (
   STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,
   EXECUTION_COUNT = 30,
   TOTAL_COMPILE_CPU_TIME_MS = 1000,
   TOTAL_EXECUTION_CPU_TIME_MS = 100
   ) );

Note: Except for STALE_CAPTURE_POLICY_THRESHOLD, these options define the OR conditions that need to happen for queries to be captured in the defined Stale Capture Policy Threshold value.


Summary

Query Store in SQL Server has been a popular feature since the SQL Server 2016 release to provide the ability for Query Store to act as a “flight data recorder” for SQL Server. Query Store accomplishes this by collecting information about query performance over time. As a result, Query Store has a strong capability to reduce the time it takes for a database administrator to respond to critical performance events and help ease the migration of databases between on-premises systems (hardware and SQL Server releases) as well as migrate database applications to the cloud.

DBAs use Query Store in many of the following scenarios to:

  • Ensure the health of database upgrades and migrations.
  • Discover and address application performance regressions.
  • Tune the most expensive queries based on resource consumption (elapsed time, compute overhead, memory, I/O, and more).
  • Maintain performance between database compatibility level upgrades.
  • And many other Query Store Usage Scenarios.

SQL Server 2022 now enables Query Store by default and introduces the ability for the query processor to leverage the historical performance data.

This capability extends Query Store’s power beyond troubleshooting and migration scenarios, and to the next level of intelligent query processing behavior in SQL Server.

Next steps

Download the latest release of SQL Server 2022 if you haven’t already done so and check out the SQL Server 2022 Overview and What’s New references. There are many new features and improved functionality being added to this release.

Learn more

For more information and to get started, check out the following references:

Read What’s New in SQL Server 2022.

Additional useful resources:

The post Query Store is enabled by default in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Improve scalability with system page latch concurrency enhancements in SQL Server 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/07/21/improve-scalability-with-system-page-latch-concurrency-enhancements-in-sql-server-2022/ Thu, 21 Jul 2022 15:00:00 +0000 Over the past several SQL Server releases, Microsoft has improved the concurrency and performance of the tempdb database.

The post Improve scalability with system page latch concurrency enhancements in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Part of the SQL Server 2022 blog series.

Over the past several SQL Server releases, Microsoft has improved the concurrency and performance of the tempdb database. In SQL Server 2022, we are addressing one of the last areas of contention by introducing concurrent global allocation map (GAM) and shared global allocation map (SGAM) updates, which gives SQL Server 2022 a big scalability improvement, as tempdb is arguably the most important database in your environment.

Tempdb performance challenges

Historically, tempdb has been one of those common pain points in SQL Server. Why was it a pain point? Well, usage is one of the key reasons. By usage, we are referring to creating temp tables and other user objects, but tempdb is also used internally to spill to disk when there isn’t enough memory available for a process or there is an inaccurate estimate that causes SQL Server to spill to tempdb.

What is the tempdb database?

Tempdb is a special purpose system database, but the structure is essentially just like any other user database. As the name suggests, the tempdb database was designed for temporary storage, meaning that nothing written to tempdb is intended to be persisted.

What is important to know is that while SQL Server uses the tempdb database for nearly every SQL Server workload, there is only one tempdb per SQL Server instance and tempdb is recreated every time SQL Server is restarted.

Tempdb workloads

The main difference between tempdb and other databases is the workload. With tempdb we are constantly creating and destroying objects such as temp tables. This is especially true in heavy OLTP environments where you may have many threads doing all kinds of work and if you are already seeing resource contention on the system the impact will be amplified.

So, what is stored in tempdb?

Of course, temp tables and table variables will go into tempdb – this is usually the first object type we think about. So, whenever you create a temporary table in a stored procedure or in a regular batch, that is going to go to tempdb.

Row versions go into tempdb if you are using snapshot isolation or read committed snapshot isolation; every time a row is modified by a specific transaction, the database engine stores a version of the previously committed image of the row in tempdb.

Hash operations will spill to tempdb. Worktables are also used for spools, cursors, sorts, and temporary large object (LOB) storage – this will all go to tempdb.

Triggers use the row version store in tempdb.

Online index operations: if you are maintaining your indexes with the ONLINE = ON option, SQL Server creates temporary shadow copies in tempdb.

DBCC CHECKDB also creates shadow copies in tempdb.

As you can see, there’s a lot that goes into tempdb. Tempdb will be used for user objects such as global or local temporary tables and indexes, stored procedures, table variables, table-valued functions, and cursors. But we also use tempdb for internal scenarios, such as spills to disk and worktables that store intermediate results for spools and sorts.

What causes contention in tempdb?

Since tempdb is used for so many different scenarios, since there is only one tempdb database per SQL Server instance, and since we have pushed towards bigger machines with larger workloads, we have started seeing concurrency issues emerge in the tempdb space in three key areas:

  1. Object allocation contention
  2. Metadata contention
  3. Temp table cache contention

What is object allocation contention?

Again, tempdb is structured just like any other database, but remember—the workloads are different in tempdb, so object allocation contention matters more because of the constant creation and destruction of objects.

On a server that is experiencing object allocation contention, you may notice severe blocking especially when the server is experiencing a heavy load. As a result, SQL Server workloads will be slowed, but the server’s CPU may appear to be underutilized. This is because the contention resides in the system metadata. To help avoid these areas of contention, a number of long-standing best practices have been recommended by SQL Server support teams.

For tempdb, one of the key best practices has always been to create multiple primary data files (mdf) at the same size and same growth rate. The reason for this was to help alleviate the object allocation contention by distributing tempdb activity across multiple partitions.

SQL Server databases must have a primary data file that uses the .mdf file extension by default and a log data file that uses the .ldf file extension. The primary tempdb data file, tempdb.mdf has key pages that track how objects are allocated in SQL Server. As we see in the image below, the table under the tempdb.mdf title represents pages.

Page types used in the data files of the tempdb database.

Page 0 is a header page, and this is true for any primary or secondary data files in SQL Server.

Page 1 is what is called a page free space (PFS) page, which is used any time SQL Server needs to allocate space to an object. Basically, the PFS page contains information on how full each of the next 8088 pages in the data file is. If SQL Server needs to add some data, it uses the PFS page to see how full the associated object’s pages are and where the data can fit.

After 8088 pages, there is another PFS page in the same data file – it repeats itself. So, you will have more than one PFS page depending on how large the file is.

Page 2 is always the global allocation map (GAM), and this is where the extent allocation comes in as the GAM tracks when SQL Server needs to allocate a uniform extent.

An extent in SQL Server is comprised of 8 x 8KB pages, 64KB, and this is usually the unit of data allocation so if you have a table that’s larger than eight pages, any time we allocate space to that table we will create a full extent, and this is a uniform extent because all eight pages in that extent belong to that object.

So, any time SQL needs to allocate a uniform extent to an object SQL will go to the GAM page and check the availability. The GAM is a bitmap, so if the bit is 1 then that extent is available to be allocated, and if it is 0 then it is not available to be allocated. Once SQL Server has allocated the extent, it just flips the bit for that GAM page from 1 to 0 to show that it is no longer available.

The larger the file, the more GAM pages you will have; another GAM page appears after every 63,904 extents.

Page 3 is for the shared global allocation map (SGAM) and this page is used if SQL needs to allocate space on a mixed extent. This SGAM page tracks mixed extent usage for when an extent is being used by more than one object. So, if I have an object that is less than eight pages and I don’t want to allocate a full extent, we will use a mixed extent. By default, when a brand-new object is created the first eight pages will be allocated on a mixed extent.

The SGAM is a bitmap, so if the bit is 1, the extent is being used as a mixed extent and has space available to be allocated; we then look at the corresponding PFS page to find the empty pages within that extent and allocate one of them. The important point here is that the SGAM is used in conjunction with the PFS page in order to allocate space on a mixed extent, and again, after about 64,000 extents you get another SGAM in the same file.

When you have multiple files in tempdb, each file gets its own header, PFS, GAM, and SGAM pages, because all files have the same structure, and with multiple files we try to share the workload across those files.

SQL Server spreads out object allocations across files in the same filegroup based on the proportional fill algorithm. We try to keep the same percentage of free space in each file within the filegroup, so if all the files in the filegroup start at the same size, stay the same size, and have the same amount of free space, the proportional fill algorithm effectively becomes a round robin algorithm: each subsequent allocation hits the next file, and so on. That is the reason we recommend having multiple files at the same size; spreading object allocations across all the files allows you to get around this object allocation bottleneck. That recommendation came out in SQL Server 2000, and it is still true in SQL Server 2019.

Multiple equally sized files remain our best practice, and this will stand until testing proves otherwise.
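As a sketch, equally sizing the existing tempdb file and adding another of the same size and growth might look like this (the file names, sizes, and paths are hypothetical):

```sql
-- Resize the primary tempdb data file and match its autogrowth.
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 8GB, FILEGROWTH = 1GB);

-- Add a second data file with the identical size and growth settings.
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2',
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 8GB, FILEGROWTH = 1GB);
```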

Tracking object allocation contention

Prior to SQL Server 2019, the best approach was to monitor the sys.dm_os_waiting_tasks dynamic management view and log the contention history over time.

Consider a SQL Server statement such as the one below, which surfaces the wait resource for waiting tasks:

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGE%LATCH_%'
AND resource_description LIKE N'2:%';

When looking at the wait resource, you can monitor the contention knowing that the first number refers to the database, the second number is the file id, and the last number is the page id, which for these system pages tells you the page type.

This means that contention on wait resource 2:7:2 is tempdb contention: the tempdb database is always database id 2, the contention is on file id 7, and page 2 is the GAM page, as the figure above illustrates (page 1 is the PFS, page 2 is the GAM, and page 3 is the SGAM).

These wait resource references are commonly in the format 2:1:1, 2:1:3, and so on.

Any results found on database id 2 indicate that there are requests waiting for tempdb resources and the accumulation of these requests can help database administrators narrow down the root cause of the contention.

In SQL Server 2019, we created new functions to improve tempdb troubleshooting. The sys.fn_PageResCracker dynamic management function returns the db_id, file_id, and page_id for a given page_resource value, and the sys.dm_db_page_info dynamic management function returns page header information such as page_id, file_id, index_id, and object_id. This information is useful for troubleshooting and debugging various performance (lock and latch contention) and corruption issues.

The example query below can be used to better resolve wait resource information on SQL Server 2019 and later releases:

SELECT page_info.*
FROM sys.dm_exec_requests AS d
CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id, 'DETAILED') AS page_info;

What is metadata contention?

The other main type of contention is called metadata contention. This type of contention is not about I/O. This contention occurs in memory when multiple threads are trying to modify the same page in memory at the same time.

You can track metadata contention using the same methods you would use to track object allocation contention; the difference is that instead of the wait resource being 2:1:1, 2:1:2, or 2:1:3 (the PFS, GAM, and SGAM), you are more likely to see the contention occurring on index and data pages, where the page number in the wait resource is a higher value such as 2:1:111, 2:1:118, or 2:1:122.

For metadata contention it is useful to make note of page numbers greater than single digits, track the object name, and the page type description. The object names will show as system tables such as sysallocunits, syscolpars, sysjobactivity, sysscalartypes, sysschobjs, and so on.

Metadata contention was addressed in SQL Server 2019 with the memory-optimized tempdb metadata improvement.

Memory-optimized metadata tables for tempdb are essentially a combination of the In-Memory OLTP and temp table metadata features. We took the tempdb system tables and moved them into non-durable memory-optimized tables.

Remember that tempdb is temporary: it is dropped and recreated every time you restart SQL Server, so there was no reason for the metadata to be durable. We converted the 12 system tables involved in object tracking in tempdb into memory-optimized, non-durable tables.

We don't need a specialized memory-optimized filegroup for tempdb since it is non-durable anyway. All of it is "in memory": no disk is needed, and with memory-optimized tables there is no latching and no locking. We can massively increase concurrency against these metadata tables using these lock-free, latch-free data structures.

Enabling memory-optimized metadata tables does require a restart, since the Hekaton DLLs must be configured. This was a big improvement in SQL Server 2019 that eliminates a great deal of metadata contention.
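The feature is opt-in; the documented command to enable it, followed by an instance restart, is:

```sql
-- Enable memory-optimized tempdb metadata (requires instance restart).
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- After the restart, verify whether the feature is active.
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_enabled;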

Learn more about how this improvement removes the metadata contention bottleneck for tempdb-heavy workloads in our memory-optimized tempdb documentation.

Temp table cache contention

In past SQL Server releases, starting with SQL Server 2005, we introduced temp table caching to work around some of the metadata contention. Basically, when a cached temp table is deleted, SQL Server doesn't actually drop the metadata; we keep a cache of all the temporary objects used by a stored procedure and reuse the metadata for those objects when the same stored procedure is called again with the same temp table.

As a result, temp table caching has fewer hits to the metadata and alleviates some of that metadata contention—but not completely.

Temp table caching helped address metadata contention by allowing us to reuse tables that didn't change between stored procedure executions. As long as the table was not altered after it was created, it would be eligible to be reused by another execution of the same stored procedure. However, if the table is altered (by adding an index or a column, for example), then it can't be reused and must be dropped when the stored procedure completes.

There are several different tables that we need to delete metadata from in order to completely drop the table, and this was all done synchronously at the end of the stored procedure execution. Additionally, every new feature added to SQL Server (columnstore indexes, temporal tables, In-Memory OLTP, and so on) requires new metadata to be tracked, so the number of system tables we need to delete from keeps increasing, which makes the process more expensive.

Temp table cache contention can be more prominent in larger SQL Server environments with higher core counts; as the size of the cache and the number of concurrent threads accessing it grow, cache access can slow down and contention can arise on the memory object associated with the cache.

This condition can manifest in two different ways: CMEMTHREAD waits and SOS_CACHESTORE spinlock waits. To address temp table cache contention, it is recommended to track these wait conditions for evidence and ensure you have installed the latest cumulative updates (CUs) for SQL Server.
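A sketch of how to look for this evidence using the documented wait and spinlock DMVs (both accumulate since the last restart, so compare snapshots over time):

```sql
-- Accumulated CMEMTHREAD waits since the last restart.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CMEMTHREAD';

-- Spinlock pressure on the cache stores, including the temp table cache.
SELECT name, collisions, spins, backoffs
FROM sys.dm_os_spinlock_stats
WHERE name LIKE 'SOS_CACHESTORE%'
ORDER BY spins DESC;
```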

Check out our Tech Community blog for more information on temp table caching in SQL Server.

SQL Server 2022 tempdb improvements

In SQL Server 2022 we have addressed the final common areas of contention by introducing concurrent GAM and SGAM updates similar to the Concurrent PFS updates.

We use the Global Allocation Map (GAM) pages when we are looking for uniform extents and the Shared Global Allocation Map (SGAM) pages when we are looking for mixed extents in tempdb.

In previous releases, under highly concurrent workloads we may see GAM contention, where many different threads attempt to allocate extents on the same GAM page and each thread must wait for another thread to release its UPDATE latch before it can obtain its own latch and make changes; effectively, everyone is waiting in line.

As can be seen in the workload example below, on SQL Server 2019 there is a wall of GAM contention driving over 123,000 counts of contention with the longest wait taking 949 milliseconds.

Dashboard showing the performance of SQL Server 2019 and 123,000 latch contentions over the last five minutes.

The reason is that with the update latch, only one thread can modify the GAM page at a time, leading to contention. This is the primary reason we still need multiple data files. Because of this contention, SQL Server throughput decreases, workloads that require many updates to the GAM page take longer to complete, and the machine's CPU is underutilized. This contention is driven by workload volume, especially the use of repetitive create-and-drop operations.

Starting with SQL Server 2016, we changed the default behavior to always allocate uniform extents when creating new objects in tempdb. This helped avoid most of the SGAM contention, but we still use mixed extents for Index Allocation Map (IAM) page allocations. IAM pages are used to track all the pages that belong to an object, so every object that gets created has at least one IAM page. For most workloads, these IAM page allocations don’t cause any issues, but for extremely busy tempdb workloads with many threads of concurrent allocations, these IAM page allocations can still cause SGAM contention.

SQL Server 2022 addresses GAM and SGAM contention

SQL Server tempdb contention is nearly completely addressed in SQL Server 2022, and these benefits are on by default. With these improvements, we allow concurrent updates to the GAM and SGAM pages under a shared latch rather than an update latch. This erases nearly all tempdb allocation contention by allowing parallel threads to modify the GAM and SGAM pages, as can be seen in the example below.

Dashboard showing a similar workload run on SQL Server 2022 with only 607 points of contention over the same period of time.

In the SQL Server 2022 workload example shown here, we have only 607 points of contention over the same time period compared to SQL Server 2019, with the longest wait at only 342 milliseconds. The only contention in this example was metadata contention, because we did not enable the memory-optimized tempdb metadata improvement.

There are still possible points of metadata contention, but with SQL Server 2022, the points of contention will be rare and should not lead to any significant performance challenges.

If concurrent GAM and SGAM updates have removed some of the last areas of contention, do we still need the best practice of maintaining multiple data files for tempdb?

Out of the gate, we are going to continue recommending the same best practices, but we may adjust this guidance if customer feedback shows it is no longer required.

Summary

In SQL Server 2022, we have improved tempdb performance to the point that we may need to revise tempdb best practices that have stood for nearly a quarter of a century.

We have greatly improved the performance of tempdb. Because so much of what runs on SQL Server relies on tempdb, these enhancements will likely be more than enough to make SQL Server 2022 a mandatory upgrade in most organizations.

The key point is that it's important for DBAs to optimize tempdb performance and to track and address tempdb bottlenecks. SQL Server has improved tempdb in every single release, and SQL Server 2022 is no exception.

Next steps

System page latch concurrency enhancements in SQL Server 2022 are just one of the many benefits of migrating to SQL Server 2022.

Download the latest release of SQL Server 2022 if you haven’t already done so and check out the SQL Server 2022 Overview and What’s New references. There are many new features and improved functionality being added to this release.

Learn more

For more information and to get started, check out the following references:

Read What’s New in SQL Server 2022

Watch the Data Exposed SQL Server 2022 overview video: SQL Server 2022 Storage Engine Capabilities (Ep. 6) | Data Exposed

Additional useful resources:

Dynamic Management Views (DMVs):

The post Improve scalability with system page latch concurrency enhancements in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
Improve scalability with Buffer Pool Parallel Scan in SQL Server 2022 http://approjects.co.za/?big=en-us/sql-server/blog/2022/07/07/improve-scalability-with-buffer-pool-parallel-scan-in-sql-server-2022/ Thu, 07 Jul 2022 17:00:00 +0000 Buffer Pool Parallel Scan is a new feature in SQL Server 2022 that improves the scalability of several common scenarios in SQL Server.

The post Improve scalability with Buffer Pool Parallel Scan in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>
This blog post is part of the SQL Server 2022 blog series.

Buffer Pool Parallel Scan is a new feature in SQL Server 2022 that improves the scalability of several common scenarios in SQL Server and as a result, could significantly improve the performance of your database workloads.

The buffer pool and buffer pool scans

The buffer pool is the area of memory that SQL Server uses to cache data. All pages must be copied into the buffer pool before they can be used in a query, and it is necessary to scan the buffer pool for a number of operations.

Buffer pool scans are a common internal operation requiring iterating through potentially millions of buffers. For example, a SQL Server instance with 1 TB of buffer space requires a buffer pool scan to iterate over 130 million buffers. This is true for any SQL Server environment regardless of its buffer space usage as the buffer pool scan always iterates through the entire buffer descriptor array to find any buffers that belong to a specific database.
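The arithmetic behind that figure is straightforward: each buffer descriptor maps one 8 KiB page, so a 1 TiB buffer pool works out to roughly the 130 million buffers cited above.

```sql
-- 1 TiB of buffer pool divided into 8 KiB pages = 134,217,728 buffers.
SELECT (CAST(1024 AS bigint) * 1024 * 1024 * 1024 * 1024) / (8 * 1024) AS buffer_count;
```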

Operations that scan the buffer pool, especially on servers with large amounts of memory, will benefit the most from the new SQL Server 2022 Buffer Pool Parallel Scan capability.

What scenarios are affected?

Operations such as database startup/shutdown, creating a new database, file drop operations, backup/restore operations, Always On failover events, DBCC CHECKDB and DBCC Check Table, log restore operations, and other internal operations (e.g., checkpoint) will all benefit from Buffer Pool Parallel Scan.

In SQL Server 2019 and previous releases, operations that require scanning the buffer pool can be slow, especially on large-memory machines such as the M-series Azure SQL virtual machines and large on-premises SQL Server environments. Even log restore operations and availability group failover operations can be affected. Prior to SQL Server 2022, there is no way to eliminate this issue, and dropping buffers using DBCC DROPCLEANBUFFERS would likely cause some performance degradation, since subsequent query executions would have to reread data from the database files, increasing I/O.

What makes these buffer pool scan operations appear to be slower than expected?

Scanning the buffer pool in SQL Server 2019 and earlier releases is always a serial operation. The larger the machine, the greater the impact, and the size of the operation doesn't necessarily matter; this impact can be seen even when creating a new, empty database.

It may seem counterintuitive that larger machines fare worse, but it is due to the serial scan process and the larger amount of buffer pool memory SQL Server has to address.

The scalability of scanning the buffer pool has significantly improved in SQL Server 2022 allowing customers to get the most out of their hardware investments.

SQL Server 2022 Buffer Pool Parallel Scan benefits

  • Buffer pool scans are parallelized by utilizing multiple cores.
  • Benefits both small and large database operations on larger memory machines.
  • Improvement adds buffer pool scan diagnostics to improve supportability and insights with new buffer pool scan events.
  • Customers running mission-critical OLTP, hosted service providers, and data warehouse environments will witness the most improvements in overall processing speed.

In SQL Server 2022, the Buffer Pool Parallel Scan feature improves the performance of buffer pool scan operations by utilizing multiple CPU cores. Customers running SQL Server 2022 may see up to a 10 – 30x improvement in executions which were previously slower due to serialized buffer pool scans.

Below is an example of creating a new database on an HPE ProLiant DL580 server with 2 TB of memory, 1.84 TB of which is devoted to the SQL Server buffer pool.

Creating a new database on a SQL Server 2019 machine took 17.204 seconds whereas the same database creation script took just over 1 second on SQL Server 2022.


Similarly, to simulate a database failover on the same server we took a sample database, set it offline, and then brought it back online. On SQL Server 2019 this event took over 3 minutes and 15 seconds. On SQL Server 2022, the same event took just over 28 seconds.


As we can see, the parallel scan feature improves the Buffer Pool scan performance of database maintenance and SQL Server workloads residing on large-memory machines by adding processing power to scan the buffer pool more efficiently, so even small operations on large machines will show a benefit.

This is a benefit that many customers will witness simply by upgrading to SQL Server 2022 as the capability is enabled by default.

“As a company with 24/7 availability requirements, we are looking forward to embracing all SQL Server 2022 features that can make database failover faster, such as Buffer Pool Parallel Scan, Parallel Redo, and Accelerated Database Recovery (ADR) enhancements. On the development side, we expect to further utilize continuous improvements in the Intelligent Query Processing package. In an environment with a lot of servers and huge databases, even when you have people and resources to deal with performance issues, each feature that can improve performance or fix performance issues automatically or without touching the code is very valuable.”

Miloš Radivojević, Head of MSSQL Database Engineering at Entain.

How does it work?

Functionally, buffer pool scans are parallelized by utilizing multiple cores. There will be one task per 8 million buffers (64 GB) where a serial scan will still be used if there are less than 8 million buffers.
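As a quick sketch of that heuristic, the 1 TiB buffer pool from the earlier example (134,217,728 buffers) would be divided into 16 parallel tasks:

```sql
-- One task per 8 million buffers (roughly 64 GB of 8 KiB pages);
-- fewer than 8 million buffers would use a serial scan.
SELECT 134217728 / 8000000 AS parallel_tasks;  -- 16
```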

This is one of those features we are very excited about, and again, you are just going to notice that SQL Server is even faster, especially when applications regularly scan the buffer pool on large servers with 1 TB of memory or more.

So, can customers already start looking for slow buffer pool scan events in their current SQL Server deployments?

Yes, with the latest cumulative updates, long buffer pool scans will be visible in the ERRORLOG starting in SQL Server 2016 SP3. The new diagnostics will provide an ERRORLOG message whenever a buffer pool scan takes longer than 10 seconds to complete.


The error log messaging is helpful to collect when you are looking to verify if the buffer pool parallel scan feature will benefit your environment.

Additionally, SQL Server 2022 also adds new Extended Events for scan start/complete, capturing error events, and Flush Cache operations for parallelized buffer pool events.

The main event to focus on is the buffer_pool_scan_complete event which is fired when a buffer pool scan takes longer than a second to complete.

This event contains the elapsed time, parallel tasks, the number of scanned buffers, the command, and the operation. 
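As an illustrative sketch (the session and file names here are arbitrary), an Extended Events session that captures the buffer_pool_scan_complete event could look like this:

```sql
-- Capture buffer pool scans that take longer than a second (SQL Server 2022).
CREATE EVENT SESSION [BufferPoolScanMonitor] ON SERVER
ADD EVENT sqlserver.buffer_pool_scan_complete
ADD TARGET package0.event_file (SET filename = N'BufferPoolScanMonitor.xel');
GO
ALTER EVENT SESSION [BufferPoolScanMonitor] ON SERVER STATE = START;
```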

Next steps

Buffer Pool Parallel Scan is just one of the many benefits of migrating to SQL Server 2022.

Download the latest release of SQL Server 2022 if you haven’t already done so, and check out the SQL Server 2022 Overview and What’s New references. There are many new features and improved functionality being added to this release.

Learn more

For more information and to get started, check out the following references:

The post Improve scalability with Buffer Pool Parallel Scan in SQL Server 2022 appeared first on Microsoft SQL Server Blog.

]]>