Friday, August 5, 2011
Slow Transaction Log restore in standby mode (Log Shipping)
I always say that crisis situations are good for your knowledge:
they can deepen your understanding of how things work.
Consider the following: you have Log Shipping configured in standby mode, and
you notice that sometimes the 'Restoring…' step takes longer than usual.
So, why does restoring a 100-200 MB transaction log take 30-40 minutes on the
secondary server of a log shipped database?
In short: because of long-running transactions and standby mode.
Now in more detail:
First we need to understand what Standby means and how it works.
When you restore transaction log(s) you have 3 options:
1) WITH RECOVERY
2) WITH NORECOVERY
3) WITH STANDBY
In Log shipping, you can choose between options 2 or 3.
When restoring transaction log(s) WITH RECOVERY, SQL Server will undo all
uncommitted transactions; the database will be accessible, but you will not
be able to restore more transaction logs.
When restoring transaction log(s) WITH NORECOVERY, SQL Server will leave
the uncommitted transactions as they are, and you will not be able to access
the database.
When restoring transaction log(s) WITH STANDBY, SQL Server will go over the
transaction log and save all uncommitted transactions to a standby data file
with the .tuf extension (TUF stands for Transaction Undo File). The database
will be accessible, in read-only mode.
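To make the three options concrete, here is roughly what each restore looks like in T-SQL (the database name and file paths below are hypothetical):

```sql
-- NORECOVERY: database stays in the restoring state, no access,
-- and more transaction logs can be restored afterwards
RESTORE LOG MyDB FROM DISK = N'D:\LogShip\MyDB_1.trn' WITH NORECOVERY;

-- STANDBY: uncommitted work is undone into the tuf file, the database
-- is accessible read-only, and more logs can still be restored
RESTORE LOG MyDB FROM DISK = N'D:\LogShip\MyDB_2.trn'
    WITH STANDBY = N'D:\LogShip\MyDB.tuf';

-- RECOVERY: uncommitted transactions are undone and the database is
-- brought online; no further logs can be restored after this
RESTORE DATABASE MyDB WITH RECOVERY;
```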
You may ask why it needs the tuf file. Well, when you restore in STANDBY mode
the database stays accessible, right? Meaning you can connect to it and view the
data (read only, because it is in read-only mode).
But how is that possible? What about uncommitted transactions?
You will not see the uncommitted data even if you read it under the
READ UNCOMMITTED isolation level. That is because SQL Server performs
UNDO for all uncommitted transactions and saves their data to the tuf file.
It simply saves the content of the pages that it had to undo, and in some
cases SQL Server saves not only the data rows but whole pages.
The tuf file is used when you restore the next transaction log file.
First, SQL Server will REDO all the uncommitted transactions that are in the
tuf file (apply the content of the undo file to the data files) and only then
continue to restore the next transaction log.
Important note: the tuf file contains the uncommitted transactions not just
from the latest transaction log but from all previous transaction logs that
have been restored.
Now that we understand how the STANDBY option works, let's see what happens
when you rebuild indexes on the primary server, and suppose that the rebuild of
some table spans more than one backed-up transaction log. For example:
the rebuild takes 20 minutes and you back up the transaction log every 5 minutes.
I think you have already got it. Upon restoring the first transaction log that
contains the start of the index rebuild, the secondary server will save the
uncommitted data to the tuf file; in our case, if we rebuild a clustered index,
it will save a lot of data to the tuf file. Upon restoring the next transaction
log, it will redo all the uncommitted transactions saved in the tuf file and
continue the restore, but in our case the index operation is not finished in the
second transaction log either, so SQL Server will again UNDO the index rebuild
and save it to the standby (tuf) file, which now contains the uncommitted data
of two transaction logs. This process repeats until the last transaction log
containing index rebuild data has been restored.
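If you want to see where a long restore is spending its time while it runs, you can watch the active RESTORE session on the secondary; a minimal sketch (sys.dm_exec_requests exposes progress counters for RESTORE commands):

```sql
-- Shows running RESTORE commands with their progress and time estimates
SELECT session_id,
       command,
       percent_complete,
       total_elapsed_time / 1000        AS elapsed_seconds,
       estimated_completion_time / 1000 AS estimated_seconds_left
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';
```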
I hope this helped you understand the reason for the slow restore of transaction
logs in a secondary log shipped database.
Thanks for reading.
Friday, July 22, 2011
Is the % of index fragmentation in a Log Shipped / Mirrored database equal to that in the Primary/Source database?
Suppose there is Log Shipping or Mirroring between database X and database Y.
The mdf/ldf file(s) of DB X and DB Y are located on different storage.
I take some index in the primary DB X, estimate its fragmentation using
“dm_db_index_physical_stats”, and do the same estimation for that index
in database Y.
SELECT dm.object_id AS objectid,
dm.index_id AS indexid,
QUOTENAME(i.name) AS name,
dm.partition_number AS partitionnum,
dm.avg_fragmentation_in_percent AS Fragmentation,
i.fill_factor AS [Fillfactor],
dm.page_count,
CASE WHEN dm.alloc_unit_type_desc <> 'IN_ROW_DATA'
OR (t.lob_data_space_id = 1 AND dm.index_type_desc <> 'NONCLUSTERED INDEX')
THEN 'OFF' ELSE 'ON'
END as [Online]
FROM [sys].[dm_db_index_physical_stats] (DB_ID(), NULL, NULL , NULL, 'LIMITED') dm
INNER JOIN [sys].[tables] t ON dm.object_id = t.object_id
INNER JOIN [sys].[indexes] i ON dm.object_id = i.object_id AND dm.index_id = i.index_id
WHERE dm.avg_fragmentation_in_percent > 0
AND dm.page_count > 1000
AND dm.index_id > 0
Let's assume that DB X and DB Y are fully synchronized,
meaning all transactions from DB X were transferred to DB Y.
The question is: is the % of index fragmentation equal in DB X and DB Y?
The answer is yes, because every log record gets replayed on the secondary.
Every page split and every allocation is fully logged and then replayed
exactly the same on the secondary.
You may ask why you would need to know the % of fragmentation on the
secondary server. Well, first of all, estimating the % of fragmentation
with “dm_db_index_physical_stats” on the production site may cause
IO load, so we can estimate the fragmentation on the secondary server instead.
Secondly, it is always good to know and understand how things work.
More info here:
http://www.sqlsoldier.com/wp/sqlserver/whatdoesredonemean
Thursday, March 18, 2010
How to create/add a new database User to a Log Shipped database
Consider the situation when you need to add a new database
user and SQL Server login to a log shipped database.
The main problem here is that the login on the log shipping
SQL Server must have the same SID as the login on the primary
SQL Server.
Here is a simple way to add a new SQL Server login with the
same SID as on the source server:
1. Create a new login on the primary SQL Server and map
it to the required DB.
2. Execute this command on the primary SQL Server
to generate a script for creating the SQL Server login
with the same SID: Exec sp_help_revlogin 'LoginName'
Copy the script and execute it on the log shipping SQL
Server.
3. After restoring the transaction logs on the log shipping
machine, the newly added database user should be mapped
to the newly created SQL Server login, and you should be able
to connect to the log shipping server with the new login.
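To verify that the SIDs really match after step 2, you can compare them on both sides; a minimal sketch ('LoginName' is a placeholder for your login):

```sql
-- Run on the primary and on the log shipping server:
-- the two SIDs returned here must be identical
SELECT name, sid FROM sys.server_principals WHERE name = 'LoginName';

-- Inside the log shipped database: the SID the database user is mapped to;
-- if it differs from the login SID, the user is orphaned
SELECT name, sid FROM sys.database_principals WHERE name = 'LoginName';
```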