So you have a query or stored procedure that is running slow. What are the main steps to identify the reason(s), and how do you change the query?
I assume you already have the query in hand, so I am not going to talk about how to find slow queries/stored procedures. I am also not going to talk about CPU/Network/Memory bottlenecks and other fancy words that generally will not help with the performance of a single query or stored procedure. What I want is to list the main steps my DBA team and I take when we have a slow-running query/SP.
1) Examine the Execution Plan:
Take a look at the query execution plan.
a. Search for Table Scan / Index Scan operators; these are significant if you access big tables. Check whether proper indexes already exist on the tables, and if they do, find out why SQL Server chose not to use them. For example, use of local variables, parameter sniffing, outdated statistics, or heavily fragmented indexes can all cause a Table/Index Scan.
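A minimal sketch for checking and refreshing statistics (the table name dbo.Orders is hypothetical):
-- Check when statistics on the table were last updated:
SELECT name, STATS_DATE(object_id, stats_id) AS LastUpdated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');

-- Refresh them with a full scan if they look stale:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;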
b. Check if you have any key lookups/bookmark lookups. With a large table or a big result set, these can lead to performance degradation. See if you can create a covering index or include the needed columns in a nonclustered index.
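A minimal sketch of a covering index (table and column names are hypothetical):
-- The query filters on OrderDate but also returns CustomerID and
-- TotalDue, causing a key lookup per row. INCLUDEd columns make
-- the index covering, so the lookup disappears:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalDue);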
c. See if you have Spool operators in the execution plan. These are blocking operators, meaning they need to get all the data at once rather than row by row. An excellent deep dive into Spool operators can be found here. Very briefly: a spool reads the data and saves it in TempDB. The Optimizer uses this whenever it knows that the density of the column is high and the intermediate result is very expensive to calculate.
d. If you see an Index Seek operator, check whether SQL Server chose the right index. There are situations where the Query Optimizer can choose the "wrong" index; this can happen when there is more than one index on the table that SQL Server can use to fetch the data. In such cases, check your query filters in the WHERE clause.
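As a diagnostic (not a permanent fix), you can force the index you believe is better and compare the plans. A sketch with hypothetical names:
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42;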
e. Compare the actual number of rows versus the estimated. A big difference between these numbers is a sign that the query runs with a badly generated execution plan. Check statistics, and consider using OPTION (RECOMPILE). Check the XML view of the execution plan to see what the values of the input parameters were when the SP was compiled. This can help you understand why SQL Server built the execution plan you see.
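A minimal sketch of OPTION (RECOMPILE) inside a procedure (all names are hypothetical):
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    -- Compile a fresh plan for the actual parameter value on every
    -- execution, instead of reusing a plan built for a sniffed value:
    OPTION (RECOMPILE);
END;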
f. For every operator in every query, you will see the operator's cost. By checking this cost, you can easily find the most expensive operators in the query. One thing to note here: even in the actual execution plan, you can only see the estimated costs, and the percentage values represent the estimated relative costs. Therefore, if there is a big difference between the estimated and actual number of rows, there is a good chance that the relative costs are misleading.
2) Analyze the output of STATISTICS IO
With SET STATISTICS IO ON, SQL Server displays information regarding the amount of disk activity generated by Transact-SQL statements.
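For example, a minimal sketch (it assumes the AdventureWorks sample database, which is where the ProductCostHistory table in the output below lives):
SET STATISTICS IO ON;

SELECT ProductID, StandardCost
FROM Production.ProductCostHistory;

SET STATISTICS IO OFF;

The Messages tab then shows output like: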
Table 'ProductCostHistory'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.
In the output you need to check the following:
Scan count - the number of index or table scans (accesses) performed.
Logical reads - the number of pages read from the data cache.
These counters are very handy when you compare the changes you make to the query.
In almost all cases, the reason for the slowness of a query is a high and unnecessary number of page reads/logical reads. Check whether the number of page reads is really what you expect; if not, here is your main problem. By looking at these counters you can see how many times SQL Server accessed a table and how many pages were read. Search for high numbers of logical reads, and compare these values after each change to the query.
3) Try to rewrite the query / stored procedure
Check whether you use DISTINCT / UNION, non-SARGable expressions (like !=), functions on columns in the WHERE clause, scalar functions, or wrong index/join hints. Use JOINs rather than subqueries.
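A minimal sketch of a non-SARGable filter and its SARGable rewrite (table and column names are hypothetical):
-- Non-SARGable: the function wrapped around the column prevents
-- an index seek on OrderDate.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2015;

-- SARGable rewrite: the column is left bare, so an index on
-- OrderDate can be used for a seek.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20150101' AND OrderDate < '20160101';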
Search for SPILL. Sometimes the server uses tempdb when executing a SELECT query; such a state is called a spill. This happens mainly for two reasons:
• The server does NOT have enough memory
• The server misestimates the memory needed to run the query
In order to detect spill warnings, act as follows:
• Search for waits of IO_COMPLETION:
select * from sys.dm_os_wait_stats with (nolock) where wait_type = 'IO_COMPLETION'
• The value of IO_COMPLETION in sys.dm_os_wait_stats can tell you that you have a spill problem, but it cannot tell you whether it is caused by an ORDER BY or a hash join, nor which SQL statement causes it. Extended Events can show you the statement that causes the spill.
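A minimal sketch of such a session (the session and file names are hypothetical; the sort_warning and hash_warning events assume SQL Server 2012 or later):
CREATE EVENT SESSION TrackSpills ON SERVER
ADD EVENT sqlserver.sort_warning
    (ACTION (sqlserver.sql_text, sqlserver.session_id)),
ADD EVENT sqlserver.hash_warning
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.event_file (SET filename = N'TrackSpills');

ALTER EVENT SESSION TrackSpills ON SERVER STATE = START;
-- The captured sql_text shows which statement spilled, and the event
-- name tells you whether a sort or a hash operation caused it.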
Pini.
Also,
* Prefer inline table-valued functions (ITVF) over multi-statement table-valued functions (MSTVF). SQL Server treats an ITVF like a VIEW and therefore calculates an execution plan using the latest statistics on the underlying tables. An MSTVF is equivalent to populating the entire contents of the SELECT statement into a table variable, so the compiler cannot use any statistics on the tables inside the MSTVF.
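A minimal sketch of the inline form (function and table names are hypothetical):
-- A single RETURN of a SELECT: treated like a view, so the
-- optimizer can use statistics on dbo.Orders.
CREATE FUNCTION dbo.GetCustomerOrders (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);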
* SET statements - the same statement executed from two machines with different SET options (e.g. SET ANSI_NULLS, SET ANSI_WARNINGS, etc.) can cause the SQL engine to create two execution plans.
* Use WITH RETURNS NULL ON NULL INPUT. This option tells the Query Processor that your function does not work with NULL passed in to the parameters, meaning the function will not be executed at all when NULL is provided. You save one function call.
* Consider using SCHEMABINDING on UDFs - there are a couple of derived properties associated with whether the UDF is schema-bound, SystemDataAccess and UserDataAccess, which can affect query plans involving the UDF. Read more: http://blogs.msdn.com/b/sqlprogrammability/archive/2006/05/12/596424.aspx
* Determinism - make scalar functions deterministic (returning the same result whenever invoked with the same arguments). You can check this with:
SELECT OBJECTPROPERTY(OBJECT_ID('FunctionName'), 'IsDeterministic');
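A minimal sketch combining these function options (the function dbo.SquareIt is hypothetical):
-- SCHEMABINDING lets SQL Server derive the data-access properties,
-- and RETURNS NULL ON NULL INPUT skips the call when @x is NULL.
CREATE FUNCTION dbo.SquareIt (@x int)
RETURNS int
WITH SCHEMABINDING, RETURNS NULL ON NULL INPUT
AS
BEGIN
    RETURN @x * @x;
END;

-- A schema-bound function with no data access is recognized as
-- deterministic:
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.SquareIt'), 'IsDeterministic');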
* Handle overlapping statistics.
Pini