Channel: SQL Server Replication forum
Viewing all 4054 articles

Database Mirroring Monitor Job

I believe that Database Mirroring Monitor Job was created when we were experimenting with mirroring. Is it safe to drop the job? Will getting rid of the job affect our (snapshot) replication?

Does moving the REPLDATA folder cause reinitialization?

Does moving the repldata folder location require reinitializing replication?

John M. Couch

Transactional replication - merge command performance


Hello,

I'm trying to find a solution to one performance issue with replication that I'm facing at the moment, so I'm hoping someone here could help.

I have one publisher database with 3 subscribers, and occasionally I need to update the publisher database with a relatively large number of rows.

For that purpose I'm using a MERGE operation, with a separate database holding the new data as the source and the publisher database as the target. The result of the merge shows that there are 3 "problematic" tables that generate a total of around 300 million rows to be distributed.

The Publisher to Distributor History shows that data is transferred to the distributor in an acceptable amount of time, but Distributor to Subscriber is taking very long. I traced the process on the subscriber using Profiler and saw that each row is updated/inserted/deleted separately, and I believe this is what causes the performance issue.

I was wondering if there is a way to apply the merged changes on the subscriber in batches, rather than row by row?

I reinitialized one subscriber and it worked fine and relatively quickly, but that is not an option, since I expect the number of subscribers to increase in the future and I need the process to be automatic.
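For reference, my understanding (from the distribution agent documentation, not something I have confirmed in this topology yet) is that the agent commits applied commands in batches controlled by the -CommitBatchSize and -CommitBatchThreshold parameters on the agent job step, along these lines (server and database names are placeholders):

```
-Publisher [PUBSRV] -PublisherDB [PubDB] -Subscriber [SUBSRV] -SubscriberDB [SubDB]
-Distributor [DISTSRV] -CommitBatchSize 1000 -CommitBatchThreshold 2000
```

Note these only batch the transaction commits; as far as I can tell, each row is still applied through its own replication stored procedure call.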

Any help would be highly appreciated! :)

BR,

Miljan

Log Shipping question...Can I recreate a log backup to "catchup" my Log Shipping Restore...?


First off, I hope I'm posting to the correct forum, as I didn't see a "Log Shipping" forum and I have a Log Shipping question.

I accidentally deleted several transaction log backups before they could be copied and restored to the secondary server.  Now, when the restore job runs, it errors out saying it needs an earlier backup, based on an LSN that it provides (pretty cool).  My question is simply: can I recreate the log backup somehow, based on the LSN?  I have a query that tells me the timestamp of the backup.  I just need to know if I can recreate it.  I'm hoping that once I can recreate the log backup, the log shipping configuration will "catch up".  It's been running fine for over 5 months.
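For reference, the query I'm using to find the timestamp and LSN range of each log backup is along these lines (a sketch; the database name is a placeholder):

```sql
-- List the log backups msdb knows about, with their LSN ranges,
-- to match against the LSN the restore job is asking for.
SELECT bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'MyDB'   -- placeholder
  AND bs.type = 'L'                -- log backups only
ORDER BY bs.backup_start_date;
```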

Thx much Folks

Roz

P2P Replication and Update Stats


I had P2P replication configured between 2 servers. Replication from Server 1 to Server 2 was working fine, but replication from Server 2 to Server 1 was broken. I removed replication completely and temporarily configured log shipping between Server 1 and Server 2, as Server 2 was not yet active.

After removing P2P replication, UPDATE STATISTICS on the database that had been part of P2P replication started failing. On researching, I found KB article KB2498796, which states that this issue is fixed in SQL 2008 SP2 CU3 (my server's version is SQL 2008 SP2). I upgraded my server to SQL 2008 SP3 CU1 and noticed that UPDATE STATISTICS continues to fail. On further research, I found that the KB article for SQL 2008 SP2 CU3 does not list KB2498796 in its bug-fix list, and I could not find any CU or SP after SP2 CU3 that would fix this issue either.

The only option I have identified so far is to clone each table and drop the existing one. My tables are really huge, and given the number of tables along with the number of indexes, constraints, triggers, and dependencies on each of them, it does not seem worthwhile to clone every table to fix this issue. Can anyone suggest an alternative fix?

The error that occurs when I run update stats is :

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Msg 0, Level 20, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Merge replication loops when the number of replicated records exceeds the set limit


Hi

We have merge replication set up. The default number of records it will try to replicate per batch is 10,000. If I update 11,000 records on the publisher, it replicates 10,902, then 10,203, then 10,233, and so on forever. It seems like it never finishes and keeps retrying over and over. When I set the record limit to 100,000, the 11,000 updated records go through with no problem and it only runs once.

Has anyone come across this? Why is it doing this?

I was able to recreate it by lowering the limit again.
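For context, I believe the limit we're changing corresponds to the merge agent's -MaxDownloadChanges / -MaxUploadChanges parameters on the agent job step, something like the following (server and database names are placeholders, and I'm not certain these are the exact switches our setup uses):

```
-Publisher [PUBSRV] -PublisherDB [PubDB] -Subscriber [SUBSRV] -SubscriberDB [SubDB]
-Distributor [DISTSRV] -MaxDownloadChanges 100000 -MaxUploadChanges 100000
```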


Jack of all trades, mastered by my wife.

Peer-to-peer replication


Hi All,

I have a few doubts about peer-to-peer replication; I am actually new to P2P replication.

Can you please explain the workflow of P2P replication and what jobs are created in P2P?

Thanks in advance for your valuable answers.


RAM

SQL Server 2008 Replication with Large Sets of Data


Hello,

I have a few questions regarding SQL replication and how it handles large sets of data.  We are exploring different options to offload reporting from the OLTP system.  The database in question is almost 1TB in size.  The tables that are used for reporting have millions and millions of records; some have billions.  We currently have a mirror set up; we take a snapshot of it and report off the snapshot.  The business now requires almost "real-time" data for their reports instead of hourly snapshots.  Constantly regenerating the snapshot is a daunting process for the server, and the application gets many errors while the snapshot is being regenerated with the latest data.  Long story short, the requirements have changed, so we are looking into transactional replication for our reporting solution.

We'd like to keep the mirror in place and add another server (or servers) for reporting.  We are thinking two more SQL Servers: one subscriber and one distributor.  The existing servers would be the principal and mirror.  From what I've been reading, it's best to have the distribution database on a dedicated server.

  1. What would be the space requirements for the distribution database?  
  2. Are there any potential issues with replicating such large sets of data?
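On question 1, my understanding is that distribution database size depends mostly on the command volume and the retention period, so once a trial publication is running, something like this sketch could gauge the backlog (MSrepl_commands is where pending and retained commands live):

```sql
USE distribution;

-- How many replication commands are currently stored
SELECT COUNT(*) AS stored_commands
FROM dbo.MSrepl_commands WITH (NOLOCK);

-- Space used by the command and transaction tables
EXEC sp_spaceused N'MSrepl_commands';
EXEC sp_spaceused N'MSrepl_transactions';
```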

Thank you for your time.


Subscriber is not getting created in SQL Replication


Hi

I am using SQL Server 2008 R2 with SQL replication. I have two different servers: the first server is the publisher and the second one is the subscriber.

I am able to create the publisher, subscribers, and distributors successfully on these two machines. The publication wizard creates the publication, and the subscription wizard says the subscription was created successfully, but I can't see it in the "Local Subscriptions" folder in Object Explorer. I also couldn't find any exceptions anywhere during this creation.

Can somebody tell me what I am missing here?

Thanks,

DDL Changes to Article (ALTER TABLE ADD COLUMN) not propagated to subscriber table, article converted to column filtering in SQL Server 2008 (Transactional Replication).


I added many columns to a table in replication via "ALTER TABLE ADD", all of data types INT, TINYINT, DECIMAL, and DATETIME, and all NULL.  The DDL changes were NOT replicated to the subscriber, and the article had been converted to column filtering. My columns are now filtered out!

When I check the article in the replication properties, the check boxes for my new columns are not checked.

This is SQL Server 2008 SP1 CU6 (10.0.4321).

I created another, similar publication in our test environment (10.0.2531), added the new columns, and the DDL change was instantly created and applied to the subscriber, so this behavior is not consistent.  In fact, this is the first time this has happened, except in our SQL Server 2005 environment.

I tried executing sp_articlecolumn to add all the columns, but got this error:

Msg 20608, Level 16, State 1, Procedure sp_MSreinit_article, Line 190

Cannot make the change because there are active subscriptions. Set @force_reinit_subscription to 1 to force the change and reinitialize the active subscriptions.

If I script out the publication, the article's columns are added one by one:

-- Adding the article's partition column(s)

exec sp_articlecolumn @publication = N'Funtime', @article = N'Mytable', @column = N'FunID', @operation = N'add', @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

So now I have to add all my columns one by one, and it looks like I will have to generate a new snapshot as well.
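To avoid typing them all out, what I'm planning to try is a loop over the table's columns (a rough sketch; 'Funtime' and 'Mytable' are from my setup, I'm assuming the dbo schema, and re-adding an already-published column may need handling):

```sql
DECLARE @col sysname;

DECLARE col_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Mytable');

OPEN col_cursor;
FETCH NEXT FROM col_cursor INTO @col;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Add each column back to the filtered article
    EXEC sp_articlecolumn
        @publication = N'Funtime',
        @article = N'Mytable',
        @column = @col,
        @operation = N'add',
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 1;

    FETCH NEXT FROM col_cursor INTO @col;
END

CLOSE col_cursor;
DEALLOCATE col_cursor;
```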

Anyone know what makes this happen and how to prevent it?

Thanks!


David z

Push subscription across a one way trust to a different domain


Hey all,

I have a SQL 2008 R2 instance from which I'm attempting to set up replication of several tables to a SQL 2012 server in a different domain.  I need to use a push subscription because the domains are connected by a one-way trust.  I've configured the publishing and distribution properties on my 2008 instance and also created a local publication.  I next tried to create a local (push) subscription to the outside machine.  When I open the subscribers dialog, I see my current server.  If I click to add a subscriber and input the fully qualified server name and credentials, I get the error message:

SQL Server replication requires the actual server name to make a connection to the server. Connections through a server alias, IP address, or any other alternate name are not supported. Specify the actual server name, 'WAREHOUSE'. (Replication.Utilities)

I'm not sure what the problem is.  It seems I can't get the new-subscriber dialog to see the outside machine.  I'm able to add the server as a linked server and access the data, but it seems to fail as the endpoint of a push subscription.
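From what I've read, the "actual server name" check compares against @@SERVERNAME on the target instance, so one diagnostic I'm considering running on the subscriber is (a sketch; 'OLD_NAME' is a placeholder for whatever stale name turns up, and the rename requires a service restart):

```sql
-- See what the instance believes its own name is versus the machine name
SELECT @@SERVERNAME AS registered_name,
       SERVERPROPERTY('ServerName') AS actual_name;

-- If they differ, re-register the local server name (verify before
-- running on a production box; takes effect after a service restart)
EXEC sp_dropserver 'OLD_NAME';
EXEC sp_addserver 'WAREHOUSE', 'local';
```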

This is my first foray into replication across domains so any help would be greatly appreciated.

Thanks in advance.

Add article to Merge replication without having to set @force_invalidate_snapshot = 1


I am adding 4 articles to a merge publication. Each article has @subset_filterclause set with the same SELECT statement. When I try to run the code, SQL Server informs me that:

"Could not add article 'dummy_name' because a snapshot is already generated. Set @force_invalidate_snapshot to 1 to force this and invalidate the existing snapshot."

I do not want to create a full snapshot of the publication. Is there a way around this?

Thank you.

Transactional Replication Filter SQL Server 2008 R2


Hi, I have a requirement to publish a subset of records in a table. The data is selected with something like:

select *
from Table_Monit a
inner join Acct b on a.Account_id = b.Account_id
where b.attID = 1090

The filter dialog only allows me to enter a WHERE clause. Can't we include joins in the filter? Can someone help me with this?
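Since transactional row filters only accept a WHERE clause on the published table itself, the workaround I'm considering is a subquery in @filter_clause, sketched below (publication name and most parameters are placeholders; note that, as I understand it, changes in Acct alone would not cause the filter to be re-evaluated for existing rows):

```sql
EXEC sp_addarticle
    @publication = N'MyPub',          -- placeholder
    @article = N'Table_Monit',
    @source_owner = N'dbo',
    @source_object = N'Table_Monit',
    -- WHERE clause applied to Table_Monit; the join becomes a subquery
    @filter_clause = N'Account_id IN (SELECT Account_id
                                      FROM dbo.Acct
                                      WHERE attID = 1090)';
```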

Thanks!

Regarding the replication log reader agent


Hi All,

I have a question about the log reader agent in replication: if the log reader agent is stopped or not running, does it affect the log file? That is, will the log file keep growing, or will any other problems occur?
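The behavior I'm worried about can at least be observed: my understanding is that while the log reader is stopped, transactions not yet read into the distribution database keep the published database's log from truncating, which should show up like this (database name is a placeholder):

```sql
-- If replication is holding the log, log_reuse_wait_desc typically
-- reports REPLICATION for the published database.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyPublishedDB';   -- placeholder
```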

Thanks,


rup

Database mirroring error


I am getting the error below when I try to set the partner on the principal server:

Error: 1474, Severity: 16, State: 1.

Database mirroring connection error 2 'Connection attempt failed with error: '10061(No connection could be made because the target machine actively refused it.)'.' for 'TCP://DRSQL2008R2.test.local:5023'.
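Since error 10061 means the target refused the connection, a first check I'd run on both partners (a sketch) is whether the mirroring endpoint exists, is started, and is listening on the expected port (5023 in the error above):

```sql
-- Mirroring endpoints, their state, and the TCP port they listen on
SELECT name, protocol_desc, type_desc, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = N'DATABASE_MIRRORING';
```

If the endpoint shows STOPPED, `ALTER ENDPOINT ... STATE = STARTED` would be the next step; if it looks fine, I'd check firewalls between the partners.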



Transactional Replication in 2012

I've used transactional replication many times with 2008 R2.  When I attempt to create a publication on a SQL 2012 server, I get the following error: "The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction. Changed database context to 'db_name'." Error 50000.  I'm unclear why I'm getting this message.  Any ideas on how to fix the issue?

Invalid object name 'MSmerge_cpmv_0EAF7516BA284880950D495D478AC1EE'


I am receiving an error:

Invalid object name 'MSmerge_cpmv_0EAF7516BA284880950D495D478AC1EE'

Any idea how to troubleshoot this, apart from reinitializing?


- Kerobin


Replication error in SQL Server 2008


Hi,

We got the following error:

The distribution agent failed to create temporary files in 'C:\Program Files\Microsoft SQL Server\100\COM' directory. System returned errorcode 5. (Source: MSSQL_REPL, Error number: MSSQL_REPL21100)
Get help: http://help/MSSQL_REPL21100

We looked at http://support.microsoft.com/default.aspx/kb/956032, which suggests the issue is related to the "Distribution Profile for OLEDB streaming" profile, but in our case we use the default profile for the distribution agent.

After we reinitialized the subscription, the issue was solved, but it came back after a while.

Any input will be appreciated.

Our environment: Windows Server 2008 Enterprise SP1, SQL Server 2008 64-bit Enterprise SP1.
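Since error code 5 is ERROR_ACCESS_DENIED, one thing we are considering (a sketch; the agent account name is a placeholder for whatever account the distribution agent actually runs under) is granting that account modify rights on the COM folder:

```
icacls "C:\Program Files\Microsoft SQL Server\100\COM" /grant "DOMAIN\ReplAgentAccount":(OI)(CI)M
```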

Start/Stop/Disable/Enable replication subscriber sync from publication server


Is it possible to start/stop/disable/enable a replication subscriber's synchronization from the publication server on a pull-subscription merge replication?


- Kerobin


SQL 2008 R2 RTM CDC Cleanup Process Question....

I recently inherited a database that has CDC enabled. There is a custom cleanup process that was written to only remove the records where the operation = 1 or 2, since it appears they wanted to only keep the before and after update statements for reporting.

Here is one of the many problems I face with this:

If I go ahead and turn on the cleanup agent (sys.sp_MScdc_cleanup_job) with a 10-year retention, it removes records from within this timespan. The table starts out with 5 million records dating back to Nov 2011; the cleanup job then removed 2 million of them, including Dec 2011 and various times in 2012.

I did find out that the database was placed into simple mode on various occasions to shrink the transaction log since there was no DBA in house at the time.

Could this be a matter of something being out of sorts within the database when it tries to determine the high and low watermarks for the records it thinks it needs to remove? I don't have a clear understanding of how it determines which records to purge from these tables; with a 10-year retention set, I would have assumed the records would remain untouched. But I know what happens when you assume something...
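To see how cleanup maps the retention back to an LSN range, I've been looking at the CDC mapping functions (a sketch; 'dbo_MyTable' is a placeholder capture instance name from my database):

```sql
-- Low/high watermarks the cleanup logic works from
SELECT sys.fn_cdc_get_min_lsn(N'dbo_MyTable') AS min_lsn,   -- placeholder instance
       sys.fn_cdc_get_max_lsn()               AS max_lsn;

-- Map a point in time (e.g. the 10-year retention boundary) to an LSN
DECLARE @cutoff datetime = DATEADD(YEAR, -10, GETDATE());
SELECT sys.fn_cdc_map_time_to_lsn(N'largest less than or equal', @cutoff) AS cutoff_lsn;
```

If the low watermark already sits well inside the retention window, that would be consistent with the unexpected deletions I'm seeing.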