Intelligent Query Processing: feature family additions

Part of the SQL Server 2022 blog series.

SQL Server 2022 introduces a handful of new members to the growing Intelligent Query Processing (IQP) family. These additions range from expanding and improving various query performance feedback mechanisms to adding two new features: parameter sensitive plan (PSP) optimization and optimized plan forcing. In this blog, we give a general overview of these additions, with detailed feature-specific blogs to follow.

Feedback features

SQL Server 2022 introduces two new query plan feedback features, degree of parallelism feedback and cardinality estimation feedback, and it improves the overall experience of the existing memory grant feedback feature.

Memory grant feedback

Memory grant feedback was introduced in SQL Server 2017. It looks at executions of a query and determines whether the query used more memory than it was granted (causing a spill to disk) or far less than it was granted (possibly limiting throughput). As a query is executed multiple times, the system learns the amount of memory the query typically needs and adjusts the grant based on previous executions. In this release, we have improved memory grant feedback in two significant ways:

1. Plan feedback is now persisted on disk. This means feedback is retained across cache evictions and server restarts.
2. We have improved the algorithm that generates feedback to look at more of the query history before advising. This prevents issues that previously occurred if a plan had widely vacillating memory needs (as might be seen with a parameter sensitive plan).

As a result, users can benefit from memory grant feedback in a wider range of scenarios, and without having to re-learn feedback after a restart, failover, or plan cache eviction.
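Persisted memory grant feedback relies on Query Store. The following is a minimal sketch of the relevant settings and of one way to inspect persisted feedback; the [SalesDb] database name is hypothetical, and the configuration options shown are on by default in SQL Server 2022.

-- Memory grant feedback persistence requires Query Store in READ_WRITE mode.
-- [SalesDb] is a hypothetical database name used for illustration.
ALTER DATABASE [SalesDb] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
GO

USE [SalesDb];
GO

-- Both settings are on by default in SQL Server 2022; shown for completeness.
ALTER DATABASE SCOPED CONFIGURATION SET MEMORY_GRANT_FEEDBACK_PERSISTENCE = ON;
ALTER DATABASE SCOPED CONFIGURATION SET MEMORY_GRANT_FEEDBACK_PERCENTILE_GRANT = ON;
GO

-- Feedback persisted to Query Store can be reviewed here; feature_desc
-- distinguishes memory grant, CE, and DOP feedback.
SELECT qpf.feature_desc, qpf.feedback_data, qpf.state_desc, qt.query_sql_text
FROM sys.query_store_plan_feedback AS qpf
JOIN sys.query_store_plan AS qp ON qpf.plan_id = qp.plan_id
JOIN sys.query_store_query AS qq ON qp.query_id = qq.query_id
JOIN sys.query_store_query_text AS qt ON qq.query_text_id = qt.query_text_id;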

Degree of parallelism feedback

SQL Server has long allowed users to specify a maximum degree of parallelism (DOP) for a server or for individual queries. However, fine-tuning the most performant DOP for every query in a workload can be time-consuming and arduous, and in some cases is not possible without changing the underlying workload code. In SQL Server 2022, DOP feedback can detect opportunities to reduce the DOP for a query. DOP feedback reduces the DOP in steps until the query runs with an amount of parallelism that optimizes CPU usage and query performance.

Many customers do not have the time or expertise to tailor the degree of parallelism for each query. DOP feedback does this automatically, reducing CPU usage and overall query time while still taking advantage of the maximum DOP that works for each individual query.
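Unlike the other feedback features, DOP feedback is off by default. A minimal sketch of turning it on for the same hypothetical [SalesDb] database (Query Store must already be enabled) and checking any feedback it produces:

USE [SalesDb];  -- hypothetical database used for illustration
GO

-- DOP feedback is off by default in SQL Server 2022 and requires Query Store.
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;
GO

-- Validated DOP feedback is persisted with the other feedback features in the
-- Query Store catalog view sys.query_store_plan_feedback.
SELECT plan_id, feature_desc, feedback_data, state_desc
FROM sys.query_store_plan_feedback;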

Cardinality estimation feedback

Cardinality estimation (CE) is the process SQL Server uses to determine how many rows a specific part of a query plan is likely to return. This process incorporates a basic set of assumptions, called the model, to produce the estimates. However, the set of assumptions that works best for one query might not work as well for a different query. In previous versions of SQL Server, customers were limited to one baseline CE model, meaning that some queries always did well while others would have performed better with a different model. There have been ways to manually adjust the model, but they required code changes as well as in-depth technical investigation. In SQL Server 2022, we can now fine-tune the CE model for an individual query. This means that if some queries in a workload perform best under one set of model assumptions and others perform best with a different set, CE feedback can automatically adjust each query to use the model that suits it best. This is done in a do-no-harm, automated way.

Users no longer have to limit their workload to a single CE model or manually tweak the model for poorly performing queries. CE feedback automatically chooses the best CE model available on a per-query basis, without any manual intervention.
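As a sketch of the difference: manually overriding a model assumption has meant editing the query itself, for example with a USE HINT, while CE feedback applies this kind of adjustment automatically. The dbo.Orders table and its columns below are hypothetical.

-- Manual approach: change the CE model assumption for one query by editing
-- its code. dbo.Orders and its columns are hypothetical.
SELECT o.OrderId, o.CustomerId
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2022-01-01'
  AND o.OrderStatus = N'Shipped'
OPTION (USE HINT('ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES'));
GO

-- SQL Server 2022: CE feedback applies per-query model adjustments on its
-- own. It requires compatibility level 160 and Query Store, and the database
-- scoped configuration below is on by default.
ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = ON;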

Parameter sensitive plan optimization

SQL Server uses parameters or parameter markers in Transact-SQL statements to increase its ability to match new Transact-SQL statements with existing, previously compiled execution plans and promote plan reuse. As part of this process, commonly known as "parameter sniffing," SQL Server "sniffs" the current parameter values during query compilation and passes them to the Query Optimizer so that they can be used to generate a more efficient query plan. In many scenarios this works well, but sometimes it can go awry. Parameter sniffing (also known as parameter sensitivity) problems start to occur when a query plan that is efficient for one set of parameter values is inefficient for a different set. For example:

1. You execute a query with one or more parameterized predicates.
2. During query compilation, the query optimizer generates a query execution plan based on the compile-time parameter values. This plan is then cached and used for subsequent executions.
3. You or someone else executes the same parameterized query, but this time with different runtime parameter values. The selectivity of the predicates for those runtime values can be quite different from the selectivity estimated from the compile-time values when the plan was compiled. The cached plan may be non-optimal for some parameter values, causing performance to suffer.

This often occurs when the data distribution within a table or tables is uneven (also known as data skew). There can be many other contributing factors to parameter sniffing problems, such as variable values that are unknown to the optimizer at compile time, code branching, and so on.
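One way to observe the mismatch described above is to compare the compile-time parameter value recorded in a cached plan (the ParameterCompiledValue attribute in the plan XML) with the values the query is actually being run with. A rough sketch against the plan cache DMVs:

-- Cached plans record the parameter values they were compiled (sniffed) with
-- in the ParameterList element of the plan XML (ParameterCompiledValue).
SELECT st.text,
       qs.execution_count,
       qp.query_plan   -- inspect ParameterCompiledValue in this XML
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp;
-- Optionally add: WHERE st.text LIKE N'%...%' to filter to the query under investigation.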

At a high level, SQL Server assumes that at any given point in time there is one optimal plan for a query. That plan may change over time if, for example, enough data changes in a table, column statistics are updated and change drastically, or someone simply recompiles the query. However, the assumption remains the same: the plan cache maps each query to one plan. The parameter sensitive plan optimization (PSP) feature begins to unlock SQL Server's ability to cache two or more plans for a query at the same time, with each plan being optimal for a subset of the parameter space. PSP optimization uses the histogram on column-level statistics to identify non-uniform distributions of data. This is where the power lies: the goal is a plan that aligns with defined predicate cardinality ranges, or "buckets," which makes it possible to create different, optimal query plans based on the data distribution. For example, suppose a fictitious real estate agency has a table called PropertySearchByAgent that keeps track of real estate agents and the number of listings each agent has. If we wanted to view all of the listings for AgentId 1, we could use a parameterized query such as:

EXEC sp_executesql N'SELECT * FROM PropertySearchByAgent WHERE AgentId = @AgentId', N'@AgentId int', 1;
        \"TBD\"<\/figure>\n\n\n\n

Based on the predicate for this query (WHERE AgentId equals some value), along with the runtime cardinality of the predicate, PSP optimization determines that it will "bucketize" and create separate queries (known as query variants) to execute and cache in the plan cache. By using different query variants that map back to the original parameterized query, PSP optimization can create multiple plans for a given query. If we wanted to query the PropertySearchByAgent table for information about AgentId 3, we could use the same parameterized query as before, but this time with a parameter value of 3. Since Agent 3 has many more listings than Agent 1, PSP optimization would generate a different plan that is more optimal for the number of listings Agent 3 has.
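A sketch of that second execution, and of one way to see the resulting query variants in Query Store; PSP optimization requires compatibility level 160, and the catalog view query below is illustrative.

-- PSP optimization is on by default under compatibility level 160; shown
-- explicitly for completeness.
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;
GO

-- Same parameterized query, different parameter value. With skewed data,
-- PSP optimization can dispatch this to a different query variant and plan.
EXEC sp_executesql
    N'SELECT * FROM PropertySearchByAgent WHERE AgentId = @AgentId',
    N'@AgentId int',
    3;
GO

-- Query variants are tracked in Query Store and map back to the parent
-- parameterized query.
SELECT query_variant_query_id, parent_query_id, dispatcher_plan_id
FROM sys.query_store_query_variant;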

As a result, one of the most common performance challenges in SQL Server is one step closer to being resolved completely. This is a foundational capability that will provide great benefits for mission-critical workloads and will continue to improve over time.

Optimized plan forcing

Query optimization and compilation is a multi-phased process of quickly generating a "good enough" query execution plan. Query execution time includes the time it takes to compile the query, which can represent a substantial portion of the overall execution time and consume significant system resources such as CPU and memory. To reduce compilation overhead for repeating queries, SQL Server caches query plans for reuse. However, plans can be evicted from the cache due to memory pressure or SQL Server restarts, forcing subsequent calls of previously cached queries through a new, full compilation cycle. Optimized plan forcing has been introduced to reduce compilation time for complex queries, as well as CPU and memory overhead, by reducing the amount of work the optimization process has to repeat.
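Optimized plan forcing builds on Query Store plan forcing. A minimal sketch using placeholder identifiers; the query_id and plan_id values would come from your own Query Store.

USE [SalesDb];  -- hypothetical database used throughout these sketches
GO

-- Optimized plan forcing is on by default in SQL Server 2022 and requires
-- Query Store; the database scoped configuration is shown for completeness.
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_PLAN_FORCING = ON;
GO

-- Force a plan from Query Store. With optimized plan forcing, a compilation
-- script captured during the initial optimization is replayed on subsequent
-- compilations, skipping much of the optimizer's work. Ids are placeholders.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 451;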