Looking for ways to optimize what I just inherited.
Got a transactional replication database (3.5TB and growing) with two subscribers. Not all data is replicated: perhaps 1TB goes to one subscriber, and a few hundred GB to the other. No filtering is in use.
Once a month, I have to tear down and recreate the Integration and UAT environments, each of which includes this 3.5TB database, and the process of creating the snapshots and distributing the data is quite slow.
Network throughput between the servers is not great, and I'm trying to find the fastest way to get replication up and running.
Initializing via a SQL backup would require restoring the full 3.5TB, and even with backup compression, I'm not sure how large the BAK file would be or how long it would take to copy. TDE is in use, but since this is SQL 2016, backup compression should still work with TDE.
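For reference, here's a rough sketch of what I think the initialize-from-backup route looks like (publication, server, and path names below are placeholders, not my actual setup):

    -- On the publisher, allow the publication to be initialized from a backup
    EXEC sp_changepublication
        @publication = N'MyPublication',
        @property    = N'allow_initialize_from_backup',
        @value       = N'true';

    -- Back up the published database; on SQL 2016, compression on a TDE
    -- database reportedly only kicks in when MAXTRANSFERSIZE is above 65536
    BACKUP DATABASE MyPubDB
    TO DISK = N'\\fileshare\backups\MyPubDB.bak'
    WITH COMPRESSION, MAXTRANSFERSIZE = 1048576;

    -- Restore the backup on the subscriber, then add the subscription
    -- (run at the publisher, in the publication database)
    EXEC sp_addsubscription
        @publication      = N'MyPublication',
        @subscriber       = N'SubscriberServer',
        @destination_db   = N'MyPubDB',
        @sync_type        = N'initialize with backup',
        @backupdevicetype = N'disk',
        @backupdevicename = N'\\fileshare\backups\MyPubDB.bak';

The catch is that the restore is still the full 3.5TB per subscriber, even though each subscriber only needs a fraction of the data.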
Wondering if compression on the subscriber side might help. I looked at snapshot compression, but it seems archaic; from what I read, it uses the CAB format, and files larger than 2GB can't be compressed, which would make it basically ineffective here, if I understood correctly.
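If I were to test it anyway, my understanding is it's a per-publication property, and the compressed snapshot can only be written to an alternate snapshot folder (again, names are placeholders):

    -- Compressed snapshots are only supported in the alternate snapshot folder
    EXEC sp_changepublication
        @publication = N'MyPublication',
        @property    = N'alt_snapshot_folder',
        @value       = N'\\fileshare\alt_snapshots';

    EXEC sp_changepublication
        @publication = N'MyPublication',
        @property    = N'compress_snapshot',
        @value       = N'true',
        @force_invalidate_snapshot = 1;  -- likely needed, since this invalidates any existing snapshot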
There are 19 publications in total. A couple of the snapshots are ~250GB; the others range from 18GB to 95GB.
Thanks in advance
Ned Otter | MCSE Data Platform | www.NedOtter.com