Wednesday, September 11, 2019

SQL Server FILESTREAM with SQL Server Authentication

If you have an application that connects to SQL Server using SQL Server Authentication,
and you want to use the FILESTREAM feature in your .NET application via the SqlFileStream class, like below:

SqlFileStream sqlFileStream1 = new SqlFileStream(filePath1, transactionContext1, FileAccess.Write);

You will probably get an error saying
“Access is denied”.

All the documentation states that
“FILESTREAM requires integrated security. SQL Server authentication is not supported.”

However, there is a workaround that still allows you to use SQL Authentication in your connection string.

To use FILESTREAM in your application you need to do 2 steps:
First, you need to get from the DB the logical path name and the transaction context
(think of them as a file handle plus a token):

SELECT FileStreamColumn.PathName() AS [Path],
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
FROM dbo.YourFileStreamTable -- placeholder column/table names

The second step: using the path and transaction context, initialize the SqlFileStream object.
Both steps must be done within the same transaction scope and under the same user.
Meaning, if the application runs under the Windows account “User1”,
then the first step of getting the logical path and transaction context must
also be done as the same “User1”.

What we can do is use impersonation in SQL Server.
If the application runs under the Windows account “User1”,
we create this Windows account “User1” in SQL Server as a login and a DB user.

We then grant it the relevant permissions on the table
with the FILESTREAM columns and execute the relevant
stored procedures/SQL queries under EXECUTE AS [User1].


According to the POC we did, this workaround allows you to keep SQL Authentication in your connection string to the database.
Note: the impersonation cannot be granted to a Windows group, only to a single account.
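
Putting it together, the setup might look like the sketch below. This is an illustration only: [MyDomain\User1], TestDB, dbo.Documents and FileStreamColumn are placeholder names I made up, not names from the original scenario.

```sql
-- Sketch: create the Windows login/user the application runs under
USE master;
CREATE LOGIN [MyDomain\User1] FROM WINDOWS;
GO
USE TestDB;
CREATE USER [MyDomain\User1] FOR LOGIN [MyDomain\User1];
GRANT SELECT, INSERT, UPDATE ON dbo.Documents TO [MyDomain\User1];
GO

-- Run the FILESTREAM query impersonated, inside the transaction
-- that the SqlFileStream object will also use
EXECUTE AS USER = 'MyDomain\User1';
SELECT FileStreamColumn.PathName() AS [Path],
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
FROM dbo.Documents;
REVERT;
```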

Master Data Services Add-in for Microsoft Excel, error on connecting to MDS Repository/Server

Master Data Services is configured for access via the HTTPS protocol,
and the MDS website works over HTTPS without any issues.

But when you try to connect to the MDS server (MDM repository) in Excel,
you get the error below:

TITLE: Master Data Services Add-in for Excel
An error occurred while making the HTTP request to https://[ServerName]/MDS/service/service.svc/bhb. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server.
The underlying connection was closed: An unexpected error occurred on a send. (System)
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. (System)
An existing connection was forcibly closed by the remote host (System)

This issue was related to TLS 1.2 settings.
To solve it, you need to configure .NET to use strong cryptography
by setting/adding the following registry values (DWORD):

"SystemDefaultTlsVersions" = dword:00000001
"SchUseStrongCrypto" = dword:00000001
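
For reference, Microsoft's TLS guidance for .NET Framework 4.x places these values under the keys below; a .reg sketch (the Wow6432Node entries cover 32-bit processes, such as 32-bit Excel, on 64-bit Windows):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
```

A reboot (or at least restarting the affected application) is typically needed for the change to take effect.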

The software versions in our case:
  • Windows Server 2019 
  • Excel 2019 
  • Microsoft SQL Server 2017 (RTM-CU15) 
  • Master Data Services 14.0.10000.169 

Thursday, August 29, 2019

UPDATE (U) locks behavior in MERGE statement vs UPDATE statement

When analyzing deadlocks with MERGE and UPDATE statements,
I have noticed that UPDATE (U) Locks behave in a different way in
an UPDATE statement and in a MERGE statement.

Deadlock graph between MERGE and UPDATE statements on the same PK value

Let's assume you want to update 2 rows in a table,
and the search for these rows uses a nonclustered (NC) index.
The question is whether SQL Server will apply U locks one by one,
releasing each U lock after each row, or apply U locks
to all matched rows and release them only after the update.

Let's create a test table that has 3 fields, with a clustered (CL) and a nonclustered (NC) index:

CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, -- column list reconstructed from the text below
    EmailAddress VARCHAR(50) NOT NULL,
    FullName VARCHAR(50) NOT NULL
)
CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_EmailAddress ON dbo.Customers(EmailAddress)

INSERT INTO dbo.Customers (EmailAddress,FullName)
SELECT '' AS EmailAddress,'Dan' AS FullName UNION ALL
SELECT '' AS EmailAddress,'Rita' AS FullName UNION ALL
SELECT '' AS EmailAddress,'Greg' AS FullName UNION ALL
SELECT '' AS EmailAddress,'Mike' AS FullName UNION ALL
SELECT '' AS EmailAddress,'Bob' AS FullName
-- Note: the email address values were stripped from the original post;
-- distinct values are required for the unique index to accept the insert.

Customers Table

So we have the table with a CL index on CustomerID and an NC index on EmailAddress.
What we want to do is update the FullName field of the 2 rows
filtered by the EmailAddress column.

One way to do it is to use a simple UPDATE statement as below:

UPDATE c
SET c.FullName='N/A'
FROM dbo.Customers AS c
WHERE c.EmailAddress IN ('','')

The second way is to use a MERGE statement as below:

MERGE INTO dbo.Customers AS TGT USING ( VALUES( ''),('')) 
AS SRC (EmailAddress)
ON SRC.EmailAddress= TGT.EmailAddress
WHEN MATCHED THEN UPDATE SET TGT.FullName='N/A'; -- clause reconstructed; truncated in the original

Now let's see which locks are acquired and when they are released
when we run the UPDATE statement (only 'X' and 'U' locks are shown in the screenshot).
SQL Server will first go to the NC index to search by EmailAddress,
apply U locks to our 2 records, and then go to the CL index for the UPDATE.

In the screenshot above, the value 99 in the Res0 column represents the NC index
and the value 98 represents the CL index.

According to the lock sequence above,
we can see that a U lock is applied to the first NC index key (Row #1, ID = 4),
then a U lock is applied to the CL index key (Row #2, ID = 6),
then the U lock is converted to an X lock on the CL key (Row #3, ID = 8),
and finally, the U lock on the NC index key is released (Row #4, ID = 11).
Then the same order repeats for the second row.

Now let's see the lock sequence in the case of the MERGE statement:

We can see that SQL Server first acquired U locks on both NC keys
(2 keys, one after the other), Rows #1-4 with Res0 = 99.
Then it applied a U lock on a CL index key (Row #5, ID = 14),
then converted that U lock to an X lock on the same CL index key (Row #6, ID = 16),
and then applied a U lock for the second CL index key and converted it to an X lock
(Rows #7,8, IDs = 20,22).

In summary, in the case of the UPDATE statement, SQL Server
acquired U locks on the NC index one by one, first for one record
and then for the second, and released each U lock on the NC index
after updating the CL index.
In the case of the MERGE statement, SQL Server first acquired U locks
for both records in the NC index and did not release them until the end of the transaction.
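
One way to look at this yourself is a sketch like the one below. Note that sys.dm_tran_locks only shows locks currently held, so you pause inside the transaction; tracing every acquire/release as in the screenshots would need the lock_acquired/lock_released Extended Events instead. The email values here are placeholders.

```sql
BEGIN TRANSACTION;

MERGE INTO dbo.Customers AS TGT
USING (VALUES ('a@x.com'), ('b@x.com')) AS SRC (EmailAddress) -- placeholder emails
ON SRC.EmailAddress = TGT.EmailAddress
WHEN MATCHED THEN UPDATE SET TGT.FullName = 'N/A';

-- Inspect the KEY locks still held by this session before committing
SELECT resource_type, resource_description, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
  AND resource_type = 'KEY';

COMMIT;
```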

Wednesday, August 28, 2019

Forwarded Records in a Heap

Have you ever noticed a situation where you have a heap table
(a table without a clustered index),
and when you do SELECT * from this table, the IO reads (logical reads) are much bigger than the total number of pages the table has?

Let's see an example.
We will create a table, populate it with data, and see how many pages it takes.
Afterward, we will run an UPDATE that increases the size of the rows and check the page count again.

CREATE TABLE dbo.TestTable
(
    ID INT NOT NULL,              -- column list reconstructed from the usage below
    FName VARCHAR(4000) NOT NULL  -- variable-length so the rows can grow
)

-- Populate some data
INSERT INTO dbo.TestTable (ID, FName)
VALUES (1,'Dan')
GO 600

-- Check how many pages we have (see page_count field)
SELECT index_type_desc, page_count, avg_page_space_used_in_percent, forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID('Test'),OBJECT_ID('dbo.TestTable'),NULL,NULL,'detailed')

We can see that the table takes 2 pages,
and if we return all the data from the table, we will see 2 logical reads = 2 pages:

SELECT * FROM dbo.TestTable

Now let's update the FName field to increase the size of the rows:

UPDATE dbo.TestTable
SET FName = FName + FName

After the UPDATE, run the SELECT * again and see how many logical reads we have now:

Now it does 66 logical reads, but the number of pages the table takes is still 2:

-- Check how many pages we have (see page_count field)
SELECT index_type_desc, page_count, avg_page_space_used_in_percent, forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID('Test'),OBJECT_ID('dbo.TestTable'),NULL,NULL,'detailed')

So why are we reading 66 pages?
If you look at the 'forwarded_record_count' field, you will see that we have 64.
Meaning, we have read the 2 original pages plus 64 forwarded records.

So, when a row in a heap grows and no longer fits in its original page,
SQL Server will not simply move the row to a new page; it will move the data
and leave a forwarding pointer to the new location in the original row.
This is done so that every NC index does not have to be updated to point to the new page.

It is worth noting that when you decrease the size of a row and there is enough space available in the original page, SQL Server will move the data back to its original page and remove the forwarding pointer.
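
If forwarded records start hurting read performance, rebuilding the heap removes them; ALTER TABLE ... REBUILD has been available for this since SQL Server 2008:

```sql
-- Rebuild the heap to eliminate forwarded records
ALTER TABLE dbo.TestTable REBUILD;

-- forwarded_record_count should now be 0
SELECT forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID('Test'), OBJECT_ID('dbo.TestTable'), NULL, NULL, 'detailed');
```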

Thursday, August 22, 2019

Key Range Locks (RangeS-U) in the default Isolation Level Read Committed

As we all know, SQL Server applies key-range locks in the Serializable isolation level to prevent phantom reads.
But recently, in a deadlock graph, I encountered key-range (RangeS-U) locks under the default isolation level, Read Committed.

My question was: how come? Why?
Well, the answer is: foreign key constraints with cascading deletes.

Once you have an FK defined with “ON DELETE CASCADE” and you delete records from the parent table, SQL Server will take key-range locks on the child table to prevent insertion of new records during the cascading delete operation.
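
A minimal repro sketch (table and column names are made up): delete from the parent inside a transaction and inspect the locks held on the child before committing.

```sql
CREATE TABLE dbo.Parent (ParentID INT NOT NULL PRIMARY KEY);
CREATE TABLE dbo.Child
(
    ChildID  INT NOT NULL PRIMARY KEY,
    ParentID INT NOT NULL
        REFERENCES dbo.Parent (ParentID) ON DELETE CASCADE
);
CREATE INDEX IX_Child_ParentID ON dbo.Child (ParentID);

INSERT INTO dbo.Parent VALUES (1);
INSERT INTO dbo.Child VALUES (10, 1);

BEGIN TRANSACTION;
DELETE FROM dbo.Parent WHERE ParentID = 1; -- cascades to dbo.Child

-- Look for request_mode = 'RangeS-U' among the locks held by this session
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

ROLLBACK;
```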

So now it makes sense why you can see key-range locks (RangeS-U) under the default isolation level, Read Committed :)

Sunday, March 3, 2019

Slow Git operations in Visual Studio 2017

I have encountered a situation where every Git operation in Visual Studio 2017 version 15.9.x on a Windows 7 machine is slow.
For example: checking out a different branch, or clicking 'Changes', 'Sync', 'Fetch', or 'Pull' takes a long time.
But if you do these operations via the command line,
everything works fast.
In my case, clearing the Visual Studio folder in AppData helped: run %appdata%, then delete or rename the 'Visual Studio' folder.

Thursday, January 3, 2019

Pre-Deployment scripts are executed after the dacpac compare generates the change script

If you deploy your database with TFS,
you probably use Pre-Deployment and Post-Deployment scripts.

The important thing to note is that the process is:

  1. sqlpackage compares the dacpac to the database and generates a list of changes to make
  2. Pre-Deployment scripts are executed
  3. The auto-generated script of changes (from point #1) is executed
  4. Post-Deployment scripts are executed

As you can see,
the Pre-Deployment scripts are not actually
executed before the compare.

Let's suppose you want to execute the Pre-Deployment
scripts before sqlpackage compares the dacpac
to the database and generates the list of changes to make.

What you can do is add a step that executes the scripts
in the Pre-Deployment folder
before the sqlpackage step.
This way, the 'compare' is
done after the Pre-Deployment scripts have been executed.
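
A sketch of such an extra step, assuming the scripts live in a PreDeployment folder (the server, database, and file names below are placeholders):

```
# Run every script in the Pre-Deployment folder first
for f in PreDeployment/*.sql; do
  sqlcmd -S MyServer -d MyDatabase -b -i "$f"
done

# Only then let sqlpackage compare and publish the dacpac
sqlpackage /Action:Publish \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:MyServer \
  /TargetDatabaseName:MyDatabase
```

Note that scripts run this way must be re-runnable (idempotent), since the Pre-Deployment step in the dacpac will still execute them again at its usual point in the process.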