Monday, March 26, 2012

High-transaction replication

We are trying to set up replication of a large (over 200 GB) database, with the goal of splitting transactional processing and reporting apart, replicating from the processing database. (This is a temporary fix until we can implement our data warehouse.)
There are approximately 70 tables that need to be replicated for this to work for us, and I have broken these tables into logical groupings by publication. We ran into some errors initially ("The process could not execute 'sp_replcmds'..."), and found some references to help resolve them.
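For anyone hitting the same 'sp_replcmds' failure, one commonly cited cause is the Log Reader Agent's account not being a member of db_owner in the published database, since sp_replcmds requires that role. A minimal sketch of that fix, assuming a published database called ProcessingDB and an agent account DOMAIN\ReplAgent (both names are placeholders, not taken from the thread):

    -- Placeholder names: substitute your published database and the
    -- account the Log Reader Agent runs under.
    USE ProcessingDB;
    GO
    -- sp_replcmds may only be executed by a db_owner member of the
    -- published database, so the agent account needs that role.
    EXEC sp_addrolemember @rolename = N'db_owner', @membername = N'DOMAIN\ReplAgent';
    GO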
The next issue we have run into, and are struggling with now, is a set of errors in the log reader stating "The process could not execute 'sp_MSadd_repl_commands27hp'..." The error descriptions indicate there was deadlocking and the log reader was killed as the victim. The Distribution cleanup agent appears to be the other process locking the table(s), with that job taking up to 58 seconds to complete (48k transactions and 60k statements deleted). I modified the schedule of the cleanup job to run every minute between 6 AM and 6 PM, and every 5 minutes during the night, reasoning that each run would have fewer transactions to delete and would therefore lock the table for a shorter period. The reasoning held: the deadlocking still happens, but it recovers more quickly than before.
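A schedule change like the one described above can also be scripted rather than set through the SQL Server Agent UI. A sketch of the daytime every-minute schedule, assuming the cleanup job's schedule is named ReplCleanupDay (a hypothetical name; the night-time five-minute schedule would be a second schedule defined the same way):

    -- The schedule name is hypothetical; look it up under the
    -- 'Distribution clean up: distribution' job in SQL Server Agent.
    USE msdb;
    GO
    EXEC dbo.sp_update_schedule
        @name = N'ReplCleanupDay',
        @freq_type = 4,               -- recurs daily
        @freq_interval = 1,           -- every day
        @freq_subday_type = 4,        -- sub-day unit: minutes
        @freq_subday_interval = 1,    -- every 1 minute
        @active_start_time = 60000,   -- 06:00:00
        @active_end_time = 180000;    -- 18:00:00
    GO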
The worry we have is that we have not yet created publications for all the tables and are already sending over 500k transactions/hour through. Once we get all the tables up and going, we project roughly another 500k transactions/hour, and we are worried about the deadlocking. Are there any ideas or settings we should be looking at to enable SQL Server to handle transactional replication of 1 million+ transactions an hour, or are we asking SQL Server to do something it cannot handle?
Thanks in advance for any ideas!
You should migrate to a remote distributor. You should also look at replicating the execution of stored procedures; this will improve the performance of your solution dramatically.
I would also look at decreasing your transaction retention period to perhaps a day.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
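To make those two suggestions concrete: replicating stored-procedure execution is configured per article, and distribution retention is a property of the distribution database. A minimal sketch, assuming a publication named ProcessingPub and a procedure usp_PostTransaction (both hypothetical names):

    -- Run at the publisher, in the published database. Replicating the
    -- *execution* of a procedure ships one command downstream instead
    -- of one command per modified row. (Names are placeholders.)
    EXEC sp_addarticle
        @publication   = N'ProcessingPub',
        @article       = N'usp_PostTransaction',
        @source_object = N'usp_PostTransaction',
        @type          = N'proc exec';

    -- Run at the distributor: cap transaction retention at 24 hours so
    -- the Distribution cleanup agent has less history to delete per run.
    EXEC sp_changedistributiondb
        @database = N'distribution',
        @property = N'max_distretention',
        @value    = N'24';

Note that proc-exec replication assumes the procedure produces the same results when executed at the subscriber, so it is not appropriate for every workload.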
"Rob C" <RobC@.discussions.microsoft.com> wrote in message
news:1A4C62BB-A2E9-445D-8FE7-A9339E2EC209@.microsoft.com...
> We are trying to set up replication of a large (over 200GB) database,
> attempting to break the transactional processing and reporting apart,
> obviously replicating from the processing database. (This is a temporary
fix
> until we can implement our data warehouse.)
> There are approximately 70 tables that will need to be replicated for this
> to work for us, and I have broken these tables into logical grouping by
> publication. We ran into some errors initially ("The process could not
> execute 'sp_replcmds'..." errors), and found some references to assist
with
> this.
> The next issue we have run into and are struggling with now are errors in
> the log reader stating "The process could not execute
> 'sp_MSadd_repl_commands27hp'..." The descriptions reference that there
was
> deadlocking and the logreader was killed. The Distribution cleanup agent
> appears to be the other process that was locking the table(s), with the
job
> taking up to 58 seconds to complete (48k transactions and 60k statements
> deleted). I took and modified the schedule of the cleanup job to run
every
> minute between 6 and 6, and every 5 minutes during the night - thinking
there
> would be fewer transactions to delete, and allowing the table to be locked
> for a shorter period of time. My thinking was correct, but the
deadlocking
> still happens, but recovers quicker than previously.
> The worry we have is that we have not created the publications for all the
> tables, and are already sending over 500k transactions/hour through. Once
we
> get all the tables back up and going we are projecting about another 500k
> transactions, and are worried about the deadlocking. Are there any
> ideas/settings we should be looking at to enable SQL to handle
transactional
> replication of 1 million+ transactions an hour, or are we really looking
for
> SQL to do what it cannot handle?
> Thanks in advance for any ideas!
You can also take a look at the Microsoft white paper "Replication Performance Tuning". I have found some very interesting results on much larger databases than that by implementing some of the modifications in there, particularly the -MaxCmdsInTran switch on the Log Reader Agent and creating a performance profile for the Distribution Agent. I can give you examples or better documentation if you require. Please email Richard.Hale@.MM-Games.com with any questions.
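For context, -MaxCmdsInTran makes the Log Reader Agent break a single large transaction into batches of at most the given number of commands when writing to the distribution database, which shortens the locks the cleanup job can deadlock against. The switch goes on the agent's job step command line; a sketch, assuming a Log Reader Agent job named SRV1-ProcessingDB-1 (a hypothetical name, and the existing agent parameters shown are illustrative):

    -- Job name and existing agent parameters are placeholders: copy the
    -- current command from step 2 ('Run agent.') of your Log Reader
    -- Agent job and append -MaxCmdsInTran to it.
    USE msdb;
    GO
    DECLARE @cmd nvarchar(4000);
    SET @cmd = N'-Publisher [SRV1] -PublisherDB [ProcessingDB] '
             + N'-Distributor [SRV1] -DistributorSecurityMode 1 '
             + N'-Continuous -MaxCmdsInTran 10000';
    EXEC dbo.sp_update_jobstep
        @job_name = N'SRV1-ProcessingDB-1',
        @step_id  = 2,
        @command  = @cmd;
    GO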
"Rob C" wrote:

> We are trying to set up replication of a large (over 200GB) database,
> attempting to break the transactional processing and reporting apart,
> obviously replicating from the processing database. (This is a temporary fix
> until we can implement our data warehouse.)
> There are approximately 70 tables that will need to be replicated for this
> to work for us, and I have broken these tables into logical grouping by
> publication. We ran into some errors initially ("The process could not
> execute 'sp_replcmds'..." errors), and found some references to assist with
> this.
> The next issue we have run into and are struggling with now are errors in
> the log reader stating "The process could not execute
> 'sp_MSadd_repl_commands27hp'..." The descriptions reference that there was
> deadlocking and the logreader was killed. The Distribution cleanup agent
> appears to be the other process that was locking the table(s), with the job
> taking up to 58 seconds to complete (48k transactions and 60k statements
> deleted). I took and modified the schedule of the cleanup job to run every
> minute between 6 and 6, and every 5 minutes during the night - thinking there
> would be fewer transactions to delete, and allowing the table to be locked
> for a shorter period of time. My thinking was correct, but the deadlocking
> still happens, but recovers quicker than previously.
> The worry we have is that we have not created the publications for all the
> tables, and are already sending over 500k transactions/hour through. Once we
> get all the tables back up and going we are projecting about another 500k
> transactions, and are worried about the deadlocking. Are there any
> ideas/settings we should be looking at to enable SQL to handle transactional
> replication of 1 million+ transactions an hour, or are we really looking for
> SQL to do what it cannot handle?
> Thanks in advance for any ideas!
