Sunday, March 11, 2012

Does it affect performance to have several publications instead of just one to the same subscriber?

For example:
I am publishing about 5 tables to one subscriber.
Each table is approximately 10 GB. I want to break
them up into 5 different publications in case
something happens to one of them and I need to
reinitialize. I figure if I had to, I would
only need to reinitialize that one 10 GB table,
whereas if I had all 5 in one publication I would
then be reinitializing all five 10 GB tables instead
of just one.
Am I causing more work for my distributor? I
figured it wouldn't hurt, since it's the same log
reader for all the publications and the same
distribution agent as well.
tia
-comb
Absolutely not!
In general, when deploying very large snapshots like this you should
investigate:
- compressing your snapshots
- bcp'ing the data out to the file system and then back into the tables
- or using a DTS package with the fast insert option - this provides the
best performance.
Doing this on a live environment will require validation and some cleanup
work to ensure everything is in sync.
You will get better performance by using multiple distribution agents if you
use the independent agent option. I find that using two agents works best; more
than two causes performance degradation at the pull subscriber. Your results
may vary.
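The independent option is a property of the publication itself. If you script
your publications rather than use the GUI, it looks something like this - a
rough sketch only, and the publication name is made up:

-- Run at the publisher, in the published database. Setting
-- independent_agent to 'true' gives this publication its own
-- distribution agent instead of sharing one agent across all
-- publications for the publisher/subscriber pair.
EXEC sp_addpublication
    @publication = N'Pub_Table1',
    @sync_method = N'native',
    @repl_freq = N'continuous',
    @status = N'active',
    @independent_agent = N'true'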
"Combfilter" <adsf@.asdf.com> wrote in message
news:MPG.1be892207facc1cc9896cf@.news.newsreader.co m...
> for example:
> I am publishing about 5 tables to one subscriber.
> Each table is approx 10gb each. I want to break
> them up into 5 different publications in case
> something happens to one of them and I need to
> reinitialize. I figure If i had to , it would
> only need to reinitialize that 1 10gb table.
> Whereas if I had all 5 in one publication I would
> then be reinitilizing all 5 10gb tables instead
> of doing just one.
> Am I causing more work for my distributor? I
> figured it wouldn't hurt since it's the same log
> reader for all the publications and same
> distribution agent as well.
> tia
> -comb
|||In article <OdRqiZ7uEHA.3972
@.TK2MSFTNGP15.phx.gbl>, hilary.cotter@.gmail.com
says...
> Absolutely not!
> In general when deploying very large snapshots like this you should
> investigate
> compressing your snapshots
> bcping the data into the file system and then into the tables
> Or using a DTS package with the fast insert option - this provides the best
> performance.
> Doing this on a live environment will require validation and some clean up
> work to ensure everything is in sync.
> You will get better performance by using multiple distribution agents if you
> use the independent option. I find that using two agents works best. Most
> than two causes performance degradation a the pull subscriber. Your results
> may vary.
>
> "Combfilter" <adsf@.asdf.com> wrote in message
> news:MPG.1be892207facc1cc9896cf@.news.newsreader.co m...
>
>
Thanks Hilary.
I see how to compress the snapshots, but when
I read some of the older group discussions
about that, they say it takes a lot of time to
decompress on the subscriber side and it really
doesn't seem to be faster. I would like to find
a faster way to get these snapshots
over to the subscriber and have them sync up, but
I am not that skilled at SQL. I have read some
of your and Paul's notes about doing a backup of
the db and a restore on the subscriber, and somehow
you can set up replication "with no sync" or
something like that, but I cannot find any how-to
articles on how to do this.
thanks,
comb
|||In general a compressed snapshot will travel across the wire faster than an
uncompressed one, but then you will have to wait for the snapshot to be
extracted. I have found that the snapshot files I have worked with extract
relatively quickly and compress quickly as well.
I think you should bcp your data out to the file system, compress the files,
send them across the wire, and then bcp them in. Do a no-sync subscription and
a validation. Then you need to play catch-up to get your subscriber in sync
with the publisher.
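Roughly, the sequence looks like this - a sketch only, with made-up server,
database, and publication names, and you would repeat the bcp steps for each
table:

-- 1) At the publisher, bulk copy each table out in native format
--    (run from a command prompt, not T-SQL):
--      bcp MyDb.dbo.Table1 out D:\repl\Table1.bcp -n -S PUBSERVER -T
-- 2) Compress the .bcp files, copy them to the subscriber, extract,
--    and bulk copy them in at the subscriber:
--      bcp MyDb.dbo.Table1 in D:\repl\Table1.bcp -n -S SUBSERVER -T
-- 3) At the publisher, add the subscription with no initial snapshot.
--    sync_type 'none' tells replication the subscriber already has
--    the data, so no snapshot is generated or applied.
EXEC sp_addsubscription
    @publication = N'Pub_Table1',
    @subscriber = N'SUBSERVER',
    @destination_db = N'MyDb',
    @subscription_type = N'pull',
    @sync_type = N'none'
-- 4) Validate row counts so you know the two sides match:
EXEC sp_publication_validation
    @publication = N'Pub_Table1',
    @rowcount_only = 1

Anything that changes at the publisher between the bcp out and the start of
the distribution agent is the catch-up work I mentioned.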
If you need more details, post back here and Paul or I will help you.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Combfilter" <adsf@.asdf.com> wrote in message
news:MPG.1be979f3afe5a3769896d0@.news.newsreader.co m...[vbcol=seagreen]
> In article <OdRqiZ7uEHA.3972
> @.TK2MSFTNGP15.phx.gbl>, hilary.cotter@.gmail.com
> says...
best[vbcol=seagreen]
up[vbcol=seagreen]
you[vbcol=seagreen]
Most[vbcol=seagreen]
results
> Thanks Hilary.
> I see how to compress the snapshots, but then
> when i read some of the older group discussions
> about that , that it takes a lot of time to
> decompress on the subscriber side and really
> doesn't seem to be faster. I would like to learn
> how to find a faster way to get these snapshots
> over to the subscriber and have them sync up, but
> I am not that skilled at sql. I have read some
> of your and pauls notes about doing a backup of
> the db and restore on the subscriber and some how
> you can setup replication "with no sync" or
> something like that but cannot find any how to
> articles on how to do this.
> thanks,
> comb
|||In article <#WIdSMJvEHA.1300
@.TK2MSFTNGP14.phx.gbl>, hilary.cotter@.gmail.com
says...
> In general a compressed snapshot will travel across the wire faster than an
> uncompressed on, but then you will have to wait for the snapshot to be
> extracted. I have found that the snapshot files I have worked with extract
> relatively quickly and compress quickly as well.
> I think you should bcp your data into the file system, compress them, send
> them across the wire and then bcp them in. Do a no sync subscription and a
> validation. Then you need to play catch up to get your subscriber in sync
> with the publisher.
> If you need more details post back here and Paul or myself will help you.
>
I tried the compression method yesterday, but my
snapshot agent went suspect. I am guessing that when
you say to use the bcp method, that just means clicking the
check box that says "compress snapshot" under the
Snapshot Location tab in the publication
properties? I saw it create a .bcp file, and
then once that file was there, the next step it
started was to add it to a .cab file, so I am
assuming that was correct? Not sure why my agent
went suspect.
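For what it's worth, I think that checkbox just sets the compress_snapshot
property on the publication - something like this, though I'm only guessing
and the publication name is made up:

-- Run at the publisher, in the published database:
EXEC sp_changepublication
    @publication = N'Pub_Table1',
    @property = N'compress_snapshot',
    @value = N'true'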
Where do I find the no-sync option? Will this
hurt me in the long run? I've searched Google
Groups and Google up and down for a site that
will show this step by step.
Thanks for your and Paul's help in this group.
comb
