Channel: SQL Server Replication forum
Viewing all 4054 articles

Monitor Distribution Agent in Transactional Replication


Hi,

I want to monitor the Distribution Agent's running status in transactional replication, and to get an alert when it is stopped.

I tried to create an alert for this, but it's not working as expected (no occurrence is recorded even when the Distribution Agent is stopped).

USE [msdb]
GO
EXEC msdb.dbo.sp_update_alert @name=N'Dist Agent Stopped', 
@message_id=0, 
@severity=0, 
@enabled=1, 
@delay_between_responses=0, 
@include_event_description_in=1, 
@database_name=N'', 
@notification_message=N'Distributor agent is not running', 
@event_description_keyword=N'', 
@performance_condition=N'Replication Agents|Running|Distribution|=|0', 
@wmi_namespace=N'', 
@wmi_query=N'', 
@job_id=N'00000000-0000-0000-0000-000000000000'
GO

EXEC msdb.dbo.sp_update_notification @alert_name=N'Dist Agent Stopped', @operator_name=N'XYZ', @notification_method = 1
GO
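As an alternative to the performance counter, here is a sketch that checks the agent's SQL Server Agent job directly (assuming the Distribution Agent runs as a job in the default 'REPL-Distribution' category):

```sql
-- List distribution agent jobs that are currently idle, i.e. not running.
-- execution_status 4 = idle; see the sp_help_job documentation for other values.
USE msdb;
GO
EXEC dbo.sp_help_job
    @category_name    = N'REPL-Distribution',
    @execution_status = 4;
```

A job step or alert could then act on any rows returned.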


Thank you,

Udham Singh


MSmerge_Index_xxxxx missing on subscriber

I have 2 sites that have been replicating successfully for many years using merge replication.  We needed to add several new tables to the replication, so I added the tables to the publication's articles and regenerated the snapshot.  On the subscriber side, I stopped synchronization and restarted it to read the new snapshot.  Everything appears to be replicating fine, but I have found that, on the subscriber side only, the MSmerge_index_xxxxx index is not created on any of the newly added tables.  The publisher database looks fine: all tables have the index created correctly, but the subscriber does not.  Is this going to be an issue with my database replication?
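One way to confirm which subscriber tables are missing the index is to compare against sys.indexes; this is only a sketch, and the table names below are placeholders for the newly added articles:

```sql
-- On the subscriber: find newly added tables that have no MSmerge_index_% index.
-- Replace the IN list with the actual names of the newly added articles.
SELECT t.name AS table_name
FROM sys.tables AS t
WHERE t.name IN (N'NewTable1', N'NewTable2')  -- hypothetical article names
  AND NOT EXISTS (SELECT 1
                  FROM sys.indexes AS i
                  WHERE i.object_id = t.object_id
                    AND i.name LIKE N'MSmerge_index%');
```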

Replication issue with XML data in a varchar(max) column


I set up transactional replication and have an issue with XML data stored in a varchar(max) column. Recently I found that the XML of some rows is malformed on the subscriber but not on the publisher.

e.g., on the publisher side:

<entry1>
  <field1>
  <field2>
</entry1>
<entry2>
  <field1>
  <field2>
</entry2>

On the subscriber it got written as:

<entry1>
  <entry2>
    <field1>
    <field2>
  </entry2>
  <field1>
  <field2>
</entry1>

It happens on only a few records, seemingly at random. I am not seeing any errors on the replication side, there is no latency, and replication performance is excellent. Not sure what I am missing here.

Appreciate your inputs.


S.Prema

SQL Server 2008 R2, replicating from on-prem to a SQL Server 2008 R2 VM in Azure


I have been tasked with migrating a database of up to 15 GB from a SQL Server 2008 instance in a third-party cloud VM to SQL Azure. On go-live day, when we switch from the old host to Azure, we have a 1-hour downtime allowance.

Unfortunately, the link speed is 1 Mbps, so the transfer would take around 30 hours. What I would like to do is create an Azure VM with SQL Server 2008 R2 and then, using transactional replication, mirroring, or log shipping, let the database replicate to my new VM from the source. I have no time limit for this, as the application migration is at my discretion: it could be a day, it could be a few months.

My question is: how can I achieve this? I think transactional replication would be best, so that the database is basically up to date all the time. I need to demonstrate that it's possible, so I have created two VMs in Azure (ClientSourceVM and ClientStagingVM) in separate vnets (trying to simulate replication across the internet).

The initial snapshot would take up to two days to transfer. Would this have any adverse effect on the source database? Locking?
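One way to avoid pushing the snapshot over the slow link is to initialize the subscription from a backup instead of a snapshot; this is only a sketch, and the publication, subscriber, database, and backup file names are placeholders:

```sql
-- On the publisher: allow initialization from backup, then create the
-- subscription pointing at a backup that has been restored on the Azure VM.
EXEC sp_changepublication
    @publication = N'MyPublication',            -- hypothetical name
    @property    = N'allow_initialize_from_backup',
    @value       = N'true';

EXEC sp_addsubscription
    @publication      = N'MyPublication',
    @subscriber       = N'ClientStagingVM',     -- hypothetical subscriber
    @destination_db   = N'MyDatabase',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'D:\Backups\MyDatabase.bak';  -- hypothetical path
```

Replication then only has to deliver the commands logged after the backup was taken.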

And is it possible to do replication across the internet? It seems I need to share folders and setup permissions, but not sure how that works with servers in separate domains across the internet. Is it possible?

With SQL Server 2008 R2, is there any better option that can serve my needs?

Note: once the database gets to 2008 R2, I'll then hopefully migrate it to its Azure SQL destination on go-live day. The replication is purely to have an up-to-date database available on go-live day without waiting nearly two days for the backup file to copy.

Any advice and ideas would be greatly appreciated.

Snapshot Agent Times Out At SP_MSACTIVATE_AUTO_SUB


Hello.

I have transactional replication with a remote Distributor on a DB that I was previously publishing. I removed the publication and subscription (because I switched to a remote Distributor), and am now trying to set the DB up for replication again.

For the past 10 hours the Snapshot Agent has retried several times, and it has always gotten stuck at the same point (82% completion):

Execution Timeout Expired....Command Text: sp_MSactivate_auto_sub (with params @publication, @article = % and @status = active)

I have now raised the timeout to 3600 seconds, but regardless of whether this works or not, I would like to kindly ask the following:

  1. Any clue why it keeps failing at this stored procedure, and what exactly does this SP do?
  2. If I end up having to drop the whole publication and start anew, I am thinking of adding sp_cleanupdbreplication to my "Clean Up a Publication" set of SPs, so I also wanted to kindly ask what exactly it does, and whether I should run it on both the publisher and subscriber (and the Distributor), or only the publisher?

(I did read a bit about it, but from what I've skimmed the documentation is a bit obscure, so I thought I'd ask here for a clear answer.)

With thanks and kind regards,

Bogdan

Querying a database that has replication subscriber tables


Hi, we run 2017 Standard.

We are wondering whether queries run against replication subscriber tables can interfere with (i.e., block) the replication process, and whether the opposite is true as well, i.e., whether the replication process can block the queries.

Also, can read committed snapshot isolation be turned on for a subscribing (replication) database, theoretically to avoid any locking that the query (or replication) might create?

We don't really want to query with NOLOCK.
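If RCSI turns out to be viable, enabling it on the subscriber database is a one-line change; a minimal sketch, with a hypothetical database name:

```sql
-- Enable read committed snapshot on the subscriber database.
-- ROLLBACK IMMEDIATE disconnects other sessions so the change can complete.
ALTER DATABASE [SubscriberDB]  -- hypothetical name
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```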

What if I want replication but don't have a PK?


Hi, I have an incident history table that doesn't have a primary key. It is provided by a 3rd party, so I don't have much of a choice there.

It has a clustered index on the incident ID (which is unique on the table where the data originates) and a modified datetime. But without even looking, I suspect that the incident ID and modified datetime don't necessarily have to be unique together.

The originating table is the source for a subscribed replication table. Ideally, the subscribing DB would also receive something like a replicated history table.

Do I have any options? One option might be to set up a separate DB where log shipping would take place (I heard log shipping can't go to a DB that is enlisted as a replication subscriber), but I've heard log shipping is voluminous. And if the source of log shipping is the same transaction log I'm used to seeing, I suspect the source is in full recovery and therefore going to be much more voluminous than a simple-recovery DB source.
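Before ruling the column pair out, it may be worth checking whether (incident ID, modified datetime) is in fact unique; a sketch with hypothetical table and column names:

```sql
-- Any rows returned mean the pair is not unique and cannot serve as a key.
SELECT IncidentID, ModifiedDate, COUNT(*) AS dup_count
FROM dbo.IncidentHistory          -- hypothetical table name
GROUP BY IncidentID, ModifiedDate
HAVING COUNT(*) > 1;
```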

Logical Records in Merge Replication


Good Day All,

We're currently running SQL Server 2008 R2 in a merge replication topology.  Publisher and distributor are on the same machine.

I've done some reading on the use of Logical Records to process merge changes as a unit across related tables.  However, I've discovered three key points (from BOL):

1) Use of Logical Records has been deprecated and is not recommended for future development work,
2) Child tables can only have one parent table, and
3) Custom conflict resolution with BLH or custom resolvers is not supported for articles forming a Logical Record.

The concept of processing merge changes across related tables as a single transactional unit is ideal based on our changing business requirements.  To describe our environment, we have several shared tables in use in our replication scheme.  One would be similar to:

CREATE TABLE [dbo].[Test]
    (
    [TestID] [varchar](50) NOT NULL,
    [DocumentTypeID] [varchar](50) NULL,
    [DocumentID] [varchar](50) NULL,
    [TestDate] [datetime] NULL,
    [Remarks] [varchar](1048) NULL,
    [UserID] [varchar](50) NULL,
    CONSTRAINT
        [PK_Test_TestID] PRIMARY KEY CLUSTERED ([TestID] ASC)
    WITH
        (
        PAD_INDEX = OFF,
        STATISTICS_NORECOMPUTE = OFF,
        IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON
        )
    ON [PRIMARY]
    )
ON [PRIMARY]
GO

In our merge publication, [dbo].[Test] is configured as a joined table (unique key) to multiple filtered tables via the [DocumentID] column ([DocumentID] is unique across all filtered tables).  We'd much prefer to keep the shared-tables scheme we have in use; however, we'd also like to pursue the logical records avenue.  If the feature is being deprecated, what are our options to achieve the same end goal?  Would there be a way to process merge changes as a unit using Business Logic Handlers?

Any assistance in this regard is greatly appreciated!

Best Regards
Brad

SQL Server transactional replication is very slow due to a huge volume of data updates on the source system


Hi,

Due to a huge volume of data getting updated on the source system (an Oracle database), SQL Server transactional replication is very slow. Currently we are receiving 100K data updates every minute.

Could you suggest some other method of replicating data from the Oracle database to the SQL Server database?

The business requirement is that data replication should be near real time.

Thanks

Neeraj Dubey

Change Tracking Performance


We have a large production system and implemented change tracking 8 days ago with a retention policy of 3 days.  syscommittab is currently at 332 million records and climbing.  We have SQL 2008 SP1 CU7 installed (10.0.2766.0).  We temporarily changed the retention policy to 10 days but scaled it back again yesterday because the problem with our synchronization process was solved.  We have seen no noticeable change in the size of syscommittab.  How can someone reliably tell whether the auto cleanup is working?  (It is on.)
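One way to check whether autocleanup is advancing is to watch the minimum valid version for a tracked table over time; if it keeps rising between runs, cleanup is removing old versions. A sketch, assuming dbo.table1 is one of the tracked tables:

```sql
-- If min_valid_version grows between runs, autocleanup is making progress.
SELECT CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID(N'dbo.table1')) AS min_valid_version,
       CHANGE_TRACKING_CURRENT_VERSION() AS current_version;
```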

More importantly, we are seeing very severe performance degradation on production when querying the change tracking tables.  The following query ran in seconds a week ago; now it takes 15-20 minutes and is climbing rapidly.  The table being queried has only 12 rows of data in the internal change tracking table that represents it.  It is a very static table.

This query takes 15-20 minutes to run on production:

SELECT *
FROM CHANGETABLE(CHANGES dbo.[table1], 146316152) AS c
LEFT OUTER JOIN dbo.[table2] AS a WITH (NOLOCK)
    ON c.[CID] = a.[CID] AND c.[DID] = a.[DID]
WHERE (c.SYS_CHANGE_OPERATION = 'D'
       OR (a.[CID] IS NOT NULL AND a.[DID] IS NOT NULL));

This query takes only 43 seconds:

SELECT *
FROM CHANGETABLE(CHANGES dbo.[table1], NULL) AS c
LEFT OUTER JOIN dbo.[table2] AS a WITH (NOLOCK)
    ON c.[CID] = a.[CID] AND c.[DID] = a.[DID]
WHERE (c.SYS_CHANGE_OPERATION = 'D'
       OR (a.[CID] IS NOT NULL AND a.[DID] IS NOT NULL))
  AND c.SYS_CHANGE_VERSION > 146316152;

The difference in the execution plans is that the first query does an index seek on syscommittab using a scalar operator in the predicate, whereas the second, faster query uses an index scan against syscommittab.  We are contemplating switching to the second syntax for our data extracts, but we are unsure whether the two queries return the same result set, and if not, what the difference is.

Any help would be appreciated, this is quickly becoming a crisis.

Replication doesn't copy check and default constraints


Hi,

I'm trying to create a transactional replication from DB A to DB B. The definitions of A and B are the same, and I want to keep the B database with all the indexes, constraints, keys, etc.

But after the snapshot is applied, some of the constraints disappear. In the article properties, I set "Copy default value specifications" and "Copy check constraints" to true. However, the check constraints are removed from the subscription DB, and so are some of the default constraints. I noticed replication keeps only constraints with a constant default value (for example, 0). If the default value is a function, it's gone. The function exists in both databases, so I think there is no problem with keeping these constraints in the subscription database.
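It may help to inspect what the article is actually scripted with; the schema_option bitmask returned by sp_helparticle controls which objects the snapshot copies. A sketch with hypothetical publication and article names:

```sql
-- Check the article's schema_option (and other properties) on the publisher.
EXEC sp_helparticle
    @publication = N'MyPublication',  -- hypothetical name
    @article     = N'MyTable';        -- hypothetical name
```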

I'm using latest SQL Server 2016 (developer edition).

Is this a bug, or normal behavior?

Transactional Replication in SQL 2014


Recently we configured transactional replication in SQL Server 2014 ENT.

Both publisher and subscriber are on SQL Server 2014 ENT.

It was working fine, but now we are getting a message like:

Replicated transactions are waiting for next log backup or for mirroring partner to catch up

I have just verified that log backups are happening successfully.
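That message often appears when the publication database has the 'sync with backup' option enabled, which holds replicated transactions back until they have been backed up. A sketch to check the setting, with a hypothetical database name:

```sql
-- Returns 1 if 'sync with backup' is on for the publication database.
SELECT DATABASEPROPERTYEX(N'PublisherDB', 'IsSyncWithBackup') AS is_sync_with_backup;
```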

Can merge replication target a DB already targeted by transactional replication?


Hi, we run 2017 Standard. Some of the tables we want replicated lack PKs, and the target is the same DB where we already transactionally replicate many tables that do have PKs.

Can merge replication target a DB already targeted by transactional replication?

Does merge replication physically add the rowguid column on the publisher side?

...If yes, isn't this a problem if the publisher side is a 3rd-party product that perhaps has SELECT *'s in it? We run 2017 Std.

Can a DB that is already the target of transactional replication also be the target of custom-made ETL?


Hi, we run 2017 Std.

Due to a lack of PKs on some publisher tables, we may need to roll our own "replication" to a target DB via ETL.

The best scenario would be for us to put those tables in DBs whose PK-bearing tables already subscribe to SQL Server replication. Is that doable? Is it possible that best practice says not to do this?


Log shipping copy job failed and tried to copy transaction logs that are 3 days old


Hi All,

We have log shipping configured between 2 instances, and sometimes we get an alert that the copy job failed.

After checking the error logs for the job, we found that it is trying to copy transaction logs that are 3 days old.

We have a log retention period of 3 days, after which the logs get deleted, and a copy frequency of 15 minutes.

Today is 22 Aug 2019 and the error log shows the following:

"2019-08-22 14:00:02.47 *** Error: Could not find file '\\HYSMACFLR002\GBREVE01_SQLBackup_01$\Logshipping\EVVSEXCVS01_2\EVVSEXCVS01_2_20190820124500.trn'. (mscorlib) ***"

It clearly shows that the job is trying to copy the .trn file for 20 Aug.
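On the secondary you can check which file the copy job last picked up, to see whether it is stuck behind a file that retention has already deleted; a sketch:

```sql
-- msdb on the secondary tracks the copy job's progress per database.
SELECT secondary_database, last_copied_file, last_copied_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```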

Please help


Distribution Agent Fails - Row Not Found at Subscriber


Hello.

I started getting the following issue on my Transactional Replication with a remote Distributor:

  • Distribution Agent fails with the message
    The row was not found at the Subscriber when applying the replicated DELETE command for Table '[dbo].[Active Session]' with Primary Key(s): [Server Instance ID] = 56, [Session ID] = 13 (Source: MSSQLServer, Error number: 20598)

I took the following action:

  • I went to the publisher to check the dbo.[Active Session] table - it was empty (as I guess it should be, since a DELETE statement had been applied)
  • I went to the subscriber to check the dbo.[Active Session] table - it contained a row with [Server Instance ID] = 56 and [Session ID] = 13

I am wondering why replication says it cannot find the row when it is there.

Also, is there a way around this that does not involve re-creating the Publication and Subscription - in other words, is there an easy way around this?
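To diagnose which command the Distribution Agent is stuck on, the pending commands can be inspected in the distribution database; a sketch (the xact_seqno filters, if needed, come from the agent's error detail):

```sql
-- Browse pending replicated commands; optionally narrow the range with
-- @xact_seqno_start / @xact_seqno_end from the Distribution Agent error.
USE distribution;
GO
EXEC sp_browsereplcmds;
```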

With thanks and kind regards,

Bogdan

Replication: a merge replication often shows genstatus with value 4

There is a merge replication that often shows genstatus with value 4. How do I address it, and how can I minimize recurrence?
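The genstatus values live in the merge metadata; a sketch to see how many generations are currently affected (MSmerge_genhistory is in the publication/subscription database):

```sql
-- Count generations currently marked with genstatus = 4.
SELECT COUNT(*) AS gen4_count
FROM dbo.MSmerge_genhistory
WHERE genstatus = 4;
```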

SQL 2019 CTP 3.2 sp_adddistributor fails / bug

I am unable to use T-SQL or Management Studio to configure an instance of SQL 2019 CTP 3.2 as a distributor. Both return an error indicating "The server principal 'distributor_admin' already exists." To reproduce, just open a query window and run "sp_adddistributor @distributor = @@servername".
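A possible workaround sketch, assuming nothing else on the instance uses the leftover login (verify that first):

```sql
-- Drop the orphaned login left over from a previous configuration,
-- then retry configuring the distributor.
DROP LOGIN [distributor_admin];
GO
EXEC sp_adddistributor @distributor = @@SERVERNAME;
```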

