{"id":45932,"date":"2022-09-15T08:00:00","date_gmt":"2022-09-15T15:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/sql-server\/blog\/?p=45932"},"modified":"2024-01-31T13:44:16","modified_gmt":"2024-01-31T21:44:16","slug":"intelligent-query-processing-feature-family-additions","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/sql-server\/blog\/2022\/09\/15\/intelligent-query-processing-feature-family-additions\/","title":{"rendered":"Intelligent Query Processing: feature family additions"},"content":{"rendered":"\n
<p><em>Part of the SQL Server 2022 blog series<\/em>.<\/p>

<p>SQL Server 2022 introduces a handful of new members to the growing Intelligent Query Processing (IQP) family. These additions range from expanding and improving various query performance feedback mechanisms to adding two new features: parameter sensitive plan (PSP) optimization and optimized plan forcing. In this blog, we give a general overview of these additions, with detailed feature-specific blogs to follow.<\/p>

<h2>Feedback features<\/h2>

<p>SQL Server 2022 introduces two new query plan feedback features, degree of parallelism (DOP) feedback and cardinality estimation (CE) feedback, and it improves the overall experience of the existing feedback feature, memory grant feedback.<\/p>

<h3>Memory grant feedback<\/h3>

<p>Memory grant feedback was introduced in SQL Server 2017. It looks at executions of a query and determines whether the query used more memory than it was granted (causing a spill to disk) or far less than it was granted (possibly limiting throughput). As a query is executed multiple times, the system learns the amount of memory the query typically uses and adjusts the grant according to the needs of previous executions. In this release we have improved memory grant feedback in two significant ways:<\/p>

<ul>
<li>Feedback is now persisted to the Query Store, so learned grant adjustments survive a restart, a failover, or eviction of the plan from the plan cache.<\/li>
<li>Grant adjustments are now calculated with a percentile-based approach over the history of recent executions, rather than reacting only to the most recent execution, which helps queries whose memory needs vary from run to run.<\/li>
<\/ul>

<p>As a result, users can benefit from memory grant feedback in a wider range of scenarios, and without any re-learning of feedback after a restart, failover, or plan cache eviction.<\/p>

<h3>Degree of parallelism feedback<\/h3>

<p>SQL Server has long allowed users to specify a maximum degree of parallelism (DOP) for a server or for individual queries. However, fine-tuning the most performant DOP for every query in a workload can be time-consuming and arduous, and in some cases has not been possible without changing the workload's code. In SQL Server 2022, DOP feedback can detect opportunities to reduce the DOP for a query, lowering it in steps until the query runs with an amount of parallelism that optimizes CPU usage and query performance.<\/p>

<p>Many customers do not have the time or expertise to tailor the degree of parallelism for each individual query. DOP feedback does this automatically, reducing CPU usage and overall query time while still taking advantage of the maximum DOP that works for each query.<\/p>

<h3>Cardinality estimation feedback<\/h3>

<p>Cardinality estimation (CE) is the process SQL Server uses to determine how many rows might be returned by a specific part of a query plan. The process incorporates a basic set of assumptions, called the model, to produce the estimates. However, the set of assumptions that works best for one query might not work as well for a different query. In previous versions of SQL Server, customers were limited to one baseline CE model, so there were always some queries that did well and others that would have performed better under a different model. There have been ways to adjust the model manually, but they required code changes as well as in-depth technical investigation. In SQL Server 2022, the CE model can be fine-tuned for an individual query. If some queries in a workload perform best under one set of model assumptions and others perform best under a different set, CE feedback automatically adjusts each query to the model that suits it. This is done in a do-no-harm, automated way.<\/p>

<p>Users no longer have to limit their workload to a single CE model or manually tweak the model for poorly performing queries. CE feedback chooses the best CE model available for each query, on a per-query basis and without any manual intervention.<\/p>

<h2>Parameter sensitive plan optimization<\/h2>

<p>SQL Server uses parameters and parameter markers in Transact-SQL statements to increase its ability to match new statements with existing, previously compiled execution plans and to promote plan reuse. During compilation, SQL Server "sniffs" the current parameter values and passes them to the Query Optimizer so that they can be used to generate a more efficient query plan; this technique is known as parameter sniffing. In many scenarios this process works well, but sometimes it goes awry. Parameter sniffing (also known as parameter sensitivity) problems occur when a query plan that is efficient for one set of parameter values is inefficient for a different set. For example, a plan that uses an index seek to return a handful of rows for one parameter value can be very slow when reused for a value that matches millions of rows, where a scan would have been the better choice.<\/p>

<p>This often occurs when the data distribution within a table or tables is uneven (also known as data skew). Many other factors can contribute to parameter sniffing problems, such as variable values that are unknown to the optimizer at compile time, code branching, and so on.<\/p>

<p>At a high level, SQL Server assumes that at any given point in time there is one optimal plan for a query. That plan may change over time if, for example, enough data changes in a table, column statistics are updated and change drastically, or someone simply recompiles the query. The assumption remains the same, however: the plan cache maps each query to one plan. The parameter sensitive plan (PSP) optimization feature begins to unlock SQL Server's ability to cache two or more plans for a query at the same time, with each plan being optimal for a subset of the parameter space. PSP optimization uses the histogram on column-level statistics to identify non-uniform distributions of data. This is where the power lies: the goal is a plan that aligns with defined predicate cardinality ranges, or "buckets," which enables different, optimal query plans based on the data distribution.<\/p>

<p>For example, suppose a fictitious real estate agency has a table called PropertySearchByAgent that keeps track of real estate agents and the number of listings each agent has. If we wanted to view all of the listings for AgentId 1, we could use a parameterized query such as:<\/p>
<div><pre>
EXEC sp_executesql N'SELECT * FROM PropertySearchByAgent WHERE AgentId = @AgentId', N'@AgentId int', 1;
<\/pre><\/div>
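To see PSP optimization in action, the query above can be exercised against a skewed table. The following sketch is illustrative only: the schema for the fictitious PropertySearchByAgent table is invented here (the blog does not define it), the skewed-data load is one convenient way to create uneven distribution, and PSP optimization assumes database compatibility level 160 with Query Store capturing activity.

```sql
-- PSP optimization requires database compatibility level 160 (SQL Server 2022).
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160;

-- Hypothetical schema for the blog's fictitious table.
CREATE TABLE dbo.PropertySearchByAgent
(
    PropertyId   int IDENTITY PRIMARY KEY,
    AgentId      int NOT NULL,
    ListingPrice money NOT NULL
);
CREATE INDEX IX_PropertySearchByAgent_AgentId
    ON dbo.PropertySearchByAgent (AgentId);

-- Skewed data: AgentId 1 has a handful of listings, AgentId 2 has millions.
INSERT dbo.PropertySearchByAgent (AgentId, ListingPrice)
SELECT 1, 250000 FROM (VALUES (1),(2),(3),(4),(5)) AS v(n);

INSERT dbo.PropertySearchByAgent (AgentId, ListingPrice)
SELECT 2, 300000
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

UPDATE STATISTICS dbo.PropertySearchByAgent WITH FULLSCAN;

-- The same parameterized statement for both agents; with PSP optimization,
-- each cardinality bucket can get its own plan variant (for example, an
-- index seek for AgentId 1 and a scan for AgentId 2) instead of sharing
-- whichever plan happened to be compiled first.
EXEC sp_executesql
    N'SELECT * FROM PropertySearchByAgent WHERE AgentId = @AgentId',
    N'@AgentId int', @AgentId = 1;

EXEC sp_executesql
    N'SELECT * FROM PropertySearchByAgent WHERE AgentId = @AgentId',
    N'@AgentId int', @AgentId = 2;

-- Query Store exposes the parent (dispatcher) query and its variants.
SELECT q.query_id, v.query_variant_query_id, v.dispatcher_plan_id
FROM sys.query_store_query_variant AS v
JOIN sys.query_store_query AS q
  ON v.parent_query_id = q.query_id;
```

If PSP optimization engages, the final query returns one row per plan variant, with the parent query acting as the dispatcher that routes each execution to the variant matching its cardinality bucket.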
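The feedback features described earlier also leave an inspectable trail. As a hedged sketch (assuming Query Store is enabled and the database is at compatibility level 160, which DOP and CE feedback require), the sys.query_store_plan_feedback catalog view added in SQL Server 2022 shows which feedback the engine has recorded for which plans:

```sql
-- Query Store must be on for feedback to be persisted.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- One row per plan per feedback feature. feature_desc distinguishes
-- memory grant, CE, and DOP feedback; state_desc shows the do-no-harm
-- lifecycle (e.g., pending validation, verified, or reverted).
SELECT
    qpf.feature_desc,
    qpf.state_desc,
    qpf.feedback_data,          -- JSON payload describing the adjustment
    qt.query_sql_text
FROM sys.query_store_plan_feedback AS qpf
JOIN sys.query_store_plan       AS qp ON qpf.plan_id = qp.plan_id
JOIN sys.query_store_query      AS qq ON qp.query_id = qq.query_id
JOIN sys.query_store_query_text AS qt ON qq.query_text_id = qt.query_text_id;
```

Because the feedback rows live in the Query Store rather than only in the plan cache, this is also why memory grant adjustments no longer need to be re-learned after a restart or failover.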