I have a database with 170 tables that require full merge replication. My question is: what's the best way to handle the other 953 tables that are required for the application to run but don't need to merge data? Today, after every application upgrade, we take a backup, remove the large tables, and zip it onto a shared drive. Before the initial sync the client downloads the backup and restores it to their local machine. Then they run replication and get the snapshot of the 170 (filtered) tables. This provides decent performance, but I question how clean an approach it is.
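In case it helps, the backup/restore step is essentially this (database name, logical file names, and paths below are placeholders):

-- On the publisher, after the application upgrade
BACKUP DATABASE AppDb
TO DISK = N'\\shareddrive\backups\AppDb_postupgrade.bak'
WITH COMPRESSION, INIT;

-- On the client, before the initial merge sync
RESTORE DATABASE AppDb
FROM DISK = N'C:\temp\AppDb_postupgrade.bak'
WITH MOVE N'AppDb' TO N'C:\data\AppDb.mdf',
     MOVE N'AppDb_log' TO N'C:\data\AppDb_log.ldf',
     REPLACE;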
I recently tried marking the 953 tables as download-only to the Subscriber (option 1, allow subscriber changes). This lets the app work, but it's slow: an hour to generate the snapshot, an hour to initialize the client, and longer incremental sync times because every sync walks through all 1,000+ tables. I also see errors such as "data validation failed. Rowcount actual: 2778, expected: 2778" and "CommitBatchedUpdates failed" with "operand type clash: varbinary is incompatible with real".
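For reference, I set the download-only option with something along these lines (publication and article names are placeholders):

-- Mark an existing merge article as download-only with subscriber changes allowed
-- (subscriber_upload_options = 1); changing this property invalidates the snapshot
-- and requires subscriptions to be reinitialized.
EXEC sp_changemergearticle
    @publication = N'AppPublication',
    @article = N'LookupTable1',
    @property = N'subscriber_upload_options',
    @value = N'1',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;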
I can try to resolve those errors, but I'm curious: is there a best practice I should be following here? I'm more than willing to read documentation, but I haven't found anything that addresses the best way to set up replication for the initial database creation.
Also, is there a way to mark those 953 tables as "include in the snapshot" but exclude them from ongoing replication? The employees sync often, so walking through 950+ tables that we know didn't change isn't ideal.
Thanks!
Thomas