
Saturday, June 9, 2012

CHECKSUM and OPTION(RECOMPILE)


Working with the CHECKSUM function, I encountered a situation
where its result is affected by OPTION(RECOMPILE).


Please see the following example:




DECLARE @a VARCHAR(5), @b VARCHAR(5)
SET @a = 'a'
SET @b = 'b'

SELECT CHECKSUM(@a, @b) AS Result

SELECT CHECKSUM(@b, @a) AS Result

SELECT CHECKSUM(@a, @b) AS Result
OPTION (RECOMPILE)


As you can see, the result of CHECKSUM(@a,@b) without OPTION(RECOMPILE)
differs from the result with it; with OPTION(RECOMPILE) it looks as if
SQL Server changes the order of the variables.
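As a side note, CHECKSUM is sensitive to the order of its arguments, which is why swapping them changes the value at all. A quick check (the exact numbers will vary):

-- The two expressions typically return different values,
-- because CHECKSUM depends on the order of its arguments.
SELECT CHECKSUM('a','b') AS ab,
       CHECKSUM('b','a') AS ba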

Do you know why?
If so, please leave a comment.

Thursday, March 8, 2012

@@TRANCOUNT in a DML statement


I was debugging something related to transactions
and saw something that looked strange at first sight.

You know that @@TRANCOUNT returns
the number of BEGIN TRANSACTION
statements that have occurred on the current
connection, right?

Well, check this situation:


CREATE TABLE dbo.TempTbl (Info VARCHAR(10))
GO

INSERT INTO dbo.TempTbl(Info)
SELECT @@TRANCOUNT
GO

SELECT * FROM dbo.TempTbl
 
Here is the result set:

Info
----------
2

So why is the value of @@TRANCOUNT 2?
I did not use BEGIN TRANSACTION.
I understand that the INSERT runs as an implicit transaction,
so @@TRANCOUNT could be 1, but why does it equal 2?


The reason is that @@TRANCOUNT counts the user transaction
as well as the inner transaction, so inside a DML statement it will always be at least 2.

There is more: when using a table variable, @@TRANCOUNT is 0:
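A minimal sketch of that case (the table variable name is illustrative):

DECLARE @TempTbl TABLE (Info VARCHAR(10))

INSERT INTO @TempTbl(Info)
SELECT @@TRANCOUNT     -- inserts 0 here, unlike the real table above

SELECT * FROM @TempTbl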
This is because local variables (and table variables) are not affected
by transactions.


For more info:
http://madebysql.blogspot.com/2010/04/local-variables-and-transaction-or-all.html
http://blogs.msdn.com/b/jenss/archive/2010/07/10/usage-of-trancount-in-dml-statements.aspx

Friday, November 4, 2011

How to debug/troubleshoot a SP in a production environment?



There are times when you need to debug
a stored procedure in a production environment,
or to make an SP behave differently when you
execute it via SSMS versus when it is executed
by an application. Or you may want an SP to behave
differently for requests that come from different
versions of an application, or from specific IPs, etc.
Or you may want to know what values were
provided for the input parameters of the SP.


Here are some useful SQL functions that you can use in your SPs to achieve the above:

1) APP_NAME() - Returns the application name for the current session, if it was set by
   the application. You can set it in the connection string of the application by
   including this: "Application Name=AppName".
   By using this function you can change an SP to do X if it is executed from Server X
   and to do Y if it is executed from Server Y or via SSMS.
   Simply set the application name to a different value on Servers X and Y.
   By using this function you can also deploy SP changes without any downtime
   and with full backward compatibility.

2) CONNECTIONPROPERTY('client_net_address') - Returns the client IP address.
   For example, if you connect to a production SQL Server from SSMS on a QA/DEV
   machine, it will return the IP of that QA/DEV machine.

3) CONNECTIONPROPERTY('local_net_address') - Returns the IP address of the SQL Server
   that the connection targeted.

By using the functions listed above, you can add logic (IF or CASE statements)
to the SP, and you can troubleshoot/debug the SP without running SQL Profiler.

SELECT APP_NAME() AS Application_Name,
       CONNECTIONPROPERTY('local_net_address') AS local_net_address,
       CONNECTIONPROPERTY('client_net_address') AS client_net_address
GO
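For illustration, here is a minimal sketch of how such a branch could look inside an SP. The procedure, table, and parameter names are made up, and it assumes the SSMS query window reports an APP_NAME() starting with 'Microsoft SQL Server Management Studio':

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerID INT
AS
BEGIN
    -- Log the inputs only when the call comes from SSMS
    -- (the application sets its own Application Name in its connection string).
    IF APP_NAME() LIKE 'Microsoft SQL Server Management Studio%'
        PRINT 'Debug call, @CustomerID = ' + CAST(@CustomerID AS VARCHAR(20))
              + ', client = ' + CAST(CONNECTIONPROPERTY('client_net_address') AS VARCHAR(48))

    SELECT FirstName
    FROM   dbo.Customers
    WHERE  CustomerID = @CustomerID
END
GO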

Tuesday, June 7, 2011

@@ROWCOUNT and aggregate functions

Please note the following:

When using an aggregate function in a SELECT that returns no rows,
you will still get one row in the result set, and @@ROWCOUNT will
always be equal to 1.

Let's see an example:

CREATE TABLE TestTable (RowID INT NOT NULL)
GO

-- Fetch rows without aggregative function in the Select
SELECT * FROM TestTable

SELECT @@ROWCOUNT as RowsCount

The result set:

RowsCount
-----------
0

As you can see, the result of @@ROWCOUNT is 0, as expected.
Now let's see what happens if we add an aggregate function to the
SELECT, for example COUNT, AVG, etc.

-- Fetch rows with an aggregate function in the Select
-- (TestTable is the same empty table created above)
SELECT AVG(RowID) FROM TestTable
SELECT @@ROWCOUNT as RowsCount

The result sets:

-----------
NULL

RowsCount
-----------
1

@@ROWCOUNT is 1, even though the table is empty.
So, when writing logic that depends on the result of @@ROWCOUNT,
keep in mind the case described above.
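For example, a check like the hypothetical one below takes the 'rows were found' branch even though the table is empty:

-- The aggregate SELECT always returns exactly one row,
-- so @@ROWCOUNT is 1 and the PRINT runs even for an empty TestTable.
SELECT AVG(RowID) AS AvgRowID FROM TestTable

IF @@ROWCOUNT > 0
    PRINT 'Rows were found'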

Friday, May 6, 2011

Interesting index option IGNORE_DUP_KEY

In one of our migration scripts, we needed
to create a table to hold distinct values.
In the population process there was no
option to specify SELECT DISTINCT,
because the population was done in batches.
Instead of checking whether a value already exists
in the target table, we created a unique index
with the IGNORE_DUP_KEY option set to ON.


Another example relates to ETL processes:
if you have different sources of data,
or the same row can arrive more than once from one source,
you can use this index option on ODS/delta tables.

So, what does this option do? When it is turned off (the default) and
you try to insert a duplicate value into a column with a unique index, you get an
error like this:
Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row .........

When you create an index (or alter it) with IGNORE_DUP_KEY=ON,
the duplicate values are simply not inserted and you get a warning message:
Duplicate key was ignored.

Here is the example:
CREATE TABLE TestTbl(Name VARCHAR(10))
GO

CREATE UNIQUE NONCLUSTERED INDEX [IX_Name]
ON TestTbl (Name ASC) WITH (IGNORE_DUP_KEY = ON)
GO

INSERT INTO TestTbl(Name)
SELECT 'Dan' UNION ALL
SELECT 'Tim' UNION ALL
SELECT 'Dan'
GO
SELECT * FROM TestTbl
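-- Expected result: only the two distinct names are inserted, and the batch
-- returns the "Duplicate key was ignored." warning instead of an error:
-- Name
-- ----
-- Dan
-- Tim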

Note: this option does NOT apply to UPDATE statements; an update that
produces a duplicate key will still fail with an error. So be aware of this behavior.
Thanks for reading.

Friday, April 1, 2011

Order of Joins, FORCE ORDER Hint

Well, as you may know, SQL Server can build the execution plan
with a different join order than the one in your query. This is done as
part of query optimization.

I had a situation where a query did not perform well; the
execution plan was not optimal. In such cases, what you
should do is check whether the statistics are up to date,
check whether SQL Server uses the right indexes, check for missing
indexes, and maybe rewrite your query.

In my situation I had no time to rewrite the query,
but we noticed that the order in which SQL Server
chose to access the tables was not optimal.

The quick and dirty solution was to add the FORCE ORDER hint to the query.
What does it do? It specifies that the join order indicated by the query
syntax should be preserved during query optimization.
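For illustration, a minimal sketch of the hint on a made-up two-table join (the table names are only placeholders):

SELECT c.CompanyName, p.ProductID
FROM   dbo.Companies AS c
       INNER JOIN dbo.Products AS p ON c.CompanyID = p.CompanyID
OPTION (FORCE ORDER)   -- the join order stays as written: Companies first, then Products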

In most cases the SQL Server optimizer will do a great job,
so you should use this hint only in exceptional cases.

Friday, March 18, 2011

Search for a string in SQL Agent Job Steps



When you change the schema of a table or modify a stored procedure,
you probably need to find out whether the SP is used in SQL Server
Agent job(s), or whether there are inline SQL statements in the jobs
that use your table(s).

How to do it quickly?

SELECT   js.database_name AS DatabaseName,
         jobs.name AS JobName,
         js.step_id AS StepID,
         js.step_name AS StepName,
         js.command AS StepCommand
FROM     msdb.dbo.sysjobs AS jobs
         INNER JOIN msdb.dbo.sysjobsteps AS js ON jobs.job_id = js.job_id
WHERE    jobs.[enabled] = 1
         AND js.command LIKE '%XXXXXXXXX%'
ORDER BY jobs.name, js.step_id

Thanks for reading.

Saturday, December 4, 2010

How to drop extended properties of all columns in a table

When synchronizing database schemas between databases,
you may also need to synchronize the extended properties.


Dropping the extended properties of all columns in
a table one by one is a lot of work.

Therefore you can use the script below, which generates
drop statements for the extended properties of all columns
in a given table. All you have to do is execute
the result set you get from this script.

SET NOCOUNT ON
SELECT 'EXEC sp_dropextendedproperty
              @name = ''' + Extp.name + '''
             ,@level0type = ''schema''
             ,@level0name = ' + OBJECT_SCHEMA_NAME(Extp.major_id) + '
             ,@level1type = ''table''
             ,@level1name = ''' + OBJECT_NAME(Extp.major_id) + '''
             ,@level2type = ''column''
             ,@level2name = ''' + Col.name + '''
             GO'
FROM sys.columns AS Col
            LEFT JOIN sys.extended_properties AS Extp ON Col.object_id = Extp.major_id
                                                                    AND Col.column_id = Extp.minor_id
WHERE Extp.class_desc = 'OBJECT_OR_COLUMN'
               AND OBJECT_NAME(Col.object_id)='YourTableName'

Before executing this script in SSMS,
make sure you have switched to 'Results to Text'.

Note: if you copied the script above, pasted it into SSMS, and the
results are cut off, increase the 'Maximum number of characters displayed
in each column' setting (Tools > Options > Query Results > SQL Server >
Results to Text) to 1024, for example.

Thanks for reading.

Friday, October 22, 2010

Report progress from a batch/script/stored procedure

There are situations when you have to update or delete a
high volume of data in a table, for example when moving
data to another database (archiving data).

You will probably do it in chunks, using some loop,
in order not to hurt the system (in our case, production).

In such cases it is helpful to see that your batch/SP is not stuck,
just like you see the percentage of completion when you run a backup.



You can use PRINT to report progress, but PRINT does
not send the message to the client immediately.
Anything sent to PRINT is buffered and not released
until the buffer is full or the query completes.
This buffer is around 8 KB in size.

We will use RAISERROR with severity 10 and the NOWAIT
option. If the severity is from 0 to 10, SQL Server does
not raise an error; it returns a plain message.

RAISERROR('ProgressMessage',10,25) WITH NOWAIT

Let's take a simple example where we want to see how many
rows were affected (it can be a counter of more than one
operation) in each iteration:

DECLARE @RowsUpdated int,
        @ProgressMessage varchar(50)

SET @RowsUpdated = 1
SET @ProgressMessage = ''

WHILE @RowsUpdated > 0
BEGIN

      -- your DML statement(s) here

      SET @RowsUpdated = @@ROWCOUNT
      SET @ProgressMessage = CAST(@RowsUpdated as VARCHAR(50))

      RAISERROR(@ProgressMessage,10,25) WITH NOWAIT
END

Thanks for reading

Saturday, September 4, 2010

SELECT...INTO is faster than CREATE & INSERT


Do you use SELECT...INTO
in your code / SPs?
Or do you prefer to create
a new table and then
populate it with INSERT INTO?



Anyway, you should be aware of the following:

SELECT ... INTO is a minimally logged operation when your
database is under the simple or bulk-logged recovery model.
See more info here.

Therefore, with minimal logging, using the SELECT ... INTO
statement can be more efficient than creating a table and then
populating the table with an INSERT statement.

In most cases we use temp tables when we use
SELECT ... INTO.

A temp table is created in tempdb, right?
And as you know, the recovery model of tempdb is simple.
Therefore SQL Server will use minimal logging for our new table.
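As a rough sketch of the two patterns being compared (the table and column names are only illustrative):

-- 1) SELECT ... INTO: creates and populates the temp table in one statement,
--    which can be minimally logged.
SELECT OrderID, OrderDate
INTO   #Recent
FROM   dbo.Orders
WHERE  OrderDate >= '20100101'

DROP TABLE #Recent
GO

-- 2) CREATE TABLE + INSERT: the INSERT is fully logged by default.
CREATE TABLE #Recent (OrderID INT, OrderDate DATETIME)

INSERT INTO #Recent (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM   dbo.Orders
WHERE  OrderDate >= '20100101'

DROP TABLE #Recent
GO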

I highly recommend reading Chintak's post about this
subject; he provides real examples.

Saturday, August 28, 2010

Creating SP in Master database

So you want to create a stored procedure
in the master database in order to be able
to access it from any database on the server
without specifying the fully qualified name,
and to take advantage of dynamic context,
meaning the procedure will be executed in
the context of the calling database.

OK, first of all, give it the "sp_" prefix, and there is another thing you should
know:

You have to mark the SP as a system object in order to make the
dynamic context work.

EXEC sp_MS_marksystemobject 'sp_spname'
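A minimal sketch of the whole idea (the procedure name is made up):

USE master
GO
CREATE PROCEDURE dbo.sp_WhichDB
AS
    SELECT DB_NAME() AS CurrentDatabase
GO
EXEC sp_MS_marksystemobject 'sp_WhichDB'
GO

-- Called from any other database without a fully qualified name:
-- USE SomeUserDatabase
-- EXEC sp_WhichDB     -- returns 'SomeUserDatabase', not 'master'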

Thursday, August 12, 2010

No more soup for you! - Cool SQL Error


I had a situation where we
wanted to see how SQL Server
deals with a high number of
items in the IN clause.
For instance, we had
something like this in an SP:



SELECT Firstname
FROM   dbo.Customers
WHERE  CustomerID IN (3243,32432,324564,45645,767,....xxxx)

The number of CustomerIDs in the IN clause was about 50,000,
and they were passed as an input parameter.

When we tried to execute the SP, we got the following error:

Error: 8623, Severity: 16, State: 1 The query processor ran out of internal
resources and could not produce a query plan. This is a rare event and only
expected for extremely complex queries or queries that reference a very large
number of tables or partitions. Please simplify the query. If you believe you
have received this message in error, contact Customer Support Services for
more information.

It seems that SQL Server had a problem compiling the SP!

Saturday, April 24, 2010

Local Variables and Transaction (or All at Once Concept)

Let me start with an example of the order of variable
assignment in a SELECT statement:

DECLARE @a int,@b int
SET @a=1
SET @b=0

SELECT @a=@a+1,@b=@a
SELECT @a as a,@b as b

What will be the values of @a and @b?

Well, @a will be equal to 2 and @b will also be 2.




You may ask: so what is special about that?

OK, then how do you explain the following example,
where we swap the values of two columns in a table?

CREATE TABLE dbo.TestTable
(
Col1 int,
Col2 int
)
INSERT INTO dbo.TestTable(Col1,Col2)
SELECT 1 as Col1,2 as Col2

UPDATE dbo.TestTable
SET Col1=Col2,
    Col2=Col1

SELECT * FROM dbo.TestTable

Col1        Col2
----------- -----------
2           1

As you can see in the UPDATE statement above,
SQL Server's "All At Once" concept is at work:
the value of Col2 is not assigned to Col1
(nor the value of Col1 to Col2) right at the
beginning, but only at the end, after the
results are committed.

Explanation:
The issue is that local variables (and table variables) are not affected
by transactions!
This means that the evaluation/calculation of the values is done
immediately and from left to right, as you saw in the first example.
And here is one more example:

DECLARE @str varchar(10)
SET @Str='Test'

BEGIN TRAN

SET @Str='Changed'
PRINT @str
ROLLBACK TRAN
PRINT @str
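-- Both PRINT statements output 'Changed': the ROLLBACK does not restore the variable's value.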

But this is not the case with an UPDATE statement that swaps
column values (a single UPDATE statement is considered
an implicit transaction).

In this case the results are not committed until the transaction
is committed; in other words, the new values are not visible
while the statement is still running. Therefore it does not matter
what the order of the assignments is; I mean, there is no difference between:

UPDATE dbo.TestTable
SET Col1=Col2,
    Col2=Col1

and

UPDATE dbo.TestTable
SET Col2=Col1,
    Col1=Col2

Again, this is because an UPDATE is an implicit transaction.

Here is another example of SQL Server's all-at-once approach:
suppose you have a table with 50 million rows and you
want to update some datetime column in all rows.
You will probably write something like this:

UPDATE  SomeTable
SET          dtcolumn=GETDATE()

OK, as the result of this update all rows will have the same
datetime value; in other words, you will not see a row with a different
datetime. All updated rows will have the same precision, the same milliseconds/seconds.
Am I right?

Now, have you ever thought about why it is this way? The UPDATE
statement can take some time, a couple of milliseconds, or seconds, or minutes, right?
And the answer is: because it is an (implicit) transaction.

So, pay extra attention when you decide to use local variables
or table variables: they are not affected by transactions.

Friday, February 26, 2010

An UPDATE query that doesn't work

"Why does the following UPDATE not work?" I asked my colleague,
and did not get an answer.

Consider the following example: you have a table that holds
companies and a table that holds products per company.

SELECT  *
FROM    dbo.Companies

SELECT *
FROM   dbo.Products

What I want to do is update the "TotalProductsCost" column
in the "Companies" table with the following query:

UPDATE c
SET         c.TotalProductsCost=c.TotalProductsCost+p.Price
FROM    dbo.Companies as c
INNER JOIN dbo.Products as p ON c.CompanyID = p.CompanyID

If we fetch the row from the "Companies" table after the update, we can see
that the UPDATE statement did not work as I thought it would: it took only
the first row (only one product) of each company and not all of the products.

What I thought was that the UPDATE statement would go over the rows one by one
and take all products of each company, as the SELECT statement below does:

SELECT c.CompanyName,p.ProductID,p.Price
FROM    dbo.Companies as c
INNER JOIN dbo.Products as p ON c.CompanyID = p.CompanyID
If you have an explanation, I will be glad to hear it.
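For reference, the standard way to accumulate all of the products per company is to aggregate them first and join on that result; a rough sketch against the same tables:

UPDATE c
SET    c.TotalProductsCost = c.TotalProductsCost + p.TotalPrice
FROM   dbo.Companies AS c
       INNER JOIN ( SELECT CompanyID, SUM(Price) AS TotalPrice
                    FROM   dbo.Products
                    GROUP BY CompanyID
                  ) AS p ON c.CompanyID = p.CompanyID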

Saturday, February 6, 2010

Convert FLOAT to VARCHAR

If you try to convert a float variable to a short varchar
by using CAST or CONVERT, you will get an error message:

DECLARE @Float FLOAT
SET @Float=1.123456789
SELECT CAST(@Float AS VARCHAR(4))

Msg 232, Level 16, State 2, Line 4
Arithmetic overflow error for type varchar, value = 1.123457.

The problem is that CAST and CONVERT will convert a
float/real value to a varchar of length 6 or more.

In our case the STR function can help:
SELECT STR(@Float,6,5)

See the BOL for more information.

Friday, January 15, 2010

Declaring a variable inside an 'IF' statement

If you declare a variable inside an IF block and the code inside
the IF block is not executed, can you still use the declared variable
outside the IF statement (after the IF statement)?

Well, you can. The value of the variable will be NULL, as if no value
had been assigned.

DECLARE @Flag bit

IF @Flag=1
BEGIN
    DECLARE @Str varchar(20)
    SET @Str='TestVariable'
END

SELECT @Str as [str]
------------------------------------------------------
NULL

Wednesday, December 16, 2009

Option (Recompile) inside an "IF EXISTS" statement

Suppose a query inside an "IF EXISTS" statement has a bad query plan.
One way to fix it is to tell SQL Server to recompile the query every time
the SP is executed, by providing the OPTION(RECOMPILE) query hint.

But if you use the OPTION(RECOMPILE) hint inside an "IF EXISTS"
statement, you will get a syntax error:

IF EXISTS ( SELECT 1
                  FROM dbo.SomeTable
                  WHERE SomeColumn=SomeValue
                  OPTION(Recompile)
                 )

PRINT 'Yes'
ELSE
PRINT 'NO'
-----------------------------------------------------------------------
Incorrect syntax near the keyword 'OPTION'.

I do not understand why; if you have an explanation,
please write it in the comments.

There is a workaround (provided by Erland Sommarskog)
for this issue:

DECLARE @Flag BIT

SELECT TOP(1) @Flag=1
FROM    dbo.SomeTable
WHERE SomeColumn=SomeValue
OPTION(Recompile)

IF @Flag = 1
PRINT 'Yes'
ELSE
PRINT 'No'

Friday, October 9, 2009

Case statement with multiple conditions in Where clause


In some situations you need to filter data from a table by using a CASE
statement.
Here is an example of a CASE statement with multiple conditions/checks
in the WHERE clause.

Suppose you have a Customers table with CustomerID, VIPLevel and Age columns:

DECLARE @SomeFilter CHAR(1)
SET @SomeFilter='Y'

SELECT *
FROM Customers
WHERE (CASE WHEN @SomeFilter='Y' AND VIPLevel>2 AND Age>20 THEN 1
            WHEN @SomeFilter='N' AND CustomerID>2 AND VIPLevel>3 THEN 1
            ELSE 0 END) = 1

Conversion failed when converting the....

When using a CASE statement you should pay attention to the data types
of the return values. They should all be implicitly convertible or have
the same data type. This is relevant also for branches of the code that
will not be executed.

When you mix numbers and strings in the return values of the CASE,
SQL Server will try to convert the strings to numbers. This is according to
data type precedence.

Take a look on the following example:
---------------------------------------------------------------------------
DECLARE @str VARCHAR(10)
SET @str='Test'
SELECT CASE  WHEN @str='Test' THEN 'Some String'
                        WHEN @str='Cool' THEN 1
                        ELSE 0 END
---------------------------------------------------------------------------
Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the varchar value 'Some String' to data type int.

So, the solution in our case is to explicitly convert the 1 and 0
values to strings:

DECLARE @str VARCHAR(10)
SET @str='Test'
SELECT CASE WHEN @str='Test' THEN 'Some String'
            WHEN @str='Cool' THEN '1'
            ELSE '0'
       END

Thursday, September 10, 2009

How to find maximum/minimum value across multiple columns in a single row

In some situations you need to find the maximum/minimum
or average value across more than two columns in a table.
We can use a subquery with UNION ALL to do this.

For example:
CREATE TABLE [dbo].[TestTable]
(
[RowID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NOT NULL,
[Value1] [int] NOT NULL,
[Value2] [int] NOT NULL,
[Value3] [int] NOT NULL
)
GO
INSERT INTO TestTable(Name,Value1,Value2,Value3)
SELECT 'FirstRow',1,2,3
UNION ALL
SELECT 'SecondRow',2,1,7
UNION ALL
SELECT 'ThirdRow',8,9,5
GO

RowID       Name          Value1      Value2      Value3
----------- ------------- ----------- ----------- -----------
1           FirstRow      1           2           3
2           SecondRow     2           1           7
3           ThirdRow      8           9           5

Here is the query:

SELECT *,( SELECT MAX([Value])
                  FROM ( SELECT Value1 AS [Value]
                               UNION ALL
                              SELECT Value2
                              UNION ALL
                              SELECT Value3
                           ) AS UnionSet
                ) AS MaxValue
FROM TestTable
GO
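For what it's worth, on SQL Server 2008 and later the same result can also be produced with a table value constructor and CROSS APPLY instead of the UNION ALL subquery; a rough sketch:

SELECT t.*, mx.MaxValue
FROM   TestTable AS t
       CROSS APPLY ( SELECT MAX(v.[Value]) AS MaxValue
                     FROM   ( VALUES (t.Value1), (t.Value2), (t.Value3) ) AS v([Value])
                   ) AS mx
GO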