Channel Description:

Discussions on SQL Server Replication

older | 1 | .... | 86 | 87 | (Page 88) | 89 | 90 | .... | 181 | newer

    0 0

    Hi All,

    I am getting the below error from my Distribution Agent while it populates data to the subscriber database:

    Error executing a batch of commands. Retrying individual commands.

    I checked the error and found that a constraint violation was being thrown, so I removed the entire offending row, but I am still getting this problem. I have change tracking enabled on the table, so I can't take a new snapshot because the incremental changes might get missed. So I created a linked server and populated the data onto the subscriber, but I am still facing the same error.

    Will this approach work, i.e. populating the missing data in the subscriber via a linked server and then enabling the Log Reader and Distribution Agents?

    Kindly help me out. 
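A minimal sketch for locating the exact command the Distribution Agent keeps retrying (run in the distribution database on the Distributor; the sequence numbers below are hypothetical placeholders, so substitute the values from the agent's error detail):

```sql
-- The xact_seqno values are placeholders; take the real ones from the
-- Distribution Agent's error details or from MSrepl_errors.
USE distribution;
EXEC sp_browsereplcmds
     @xact_seqno_start = '0x0000001B0000095D000600000000',
     @xact_seqno_end   = '0x0000001B0000095D000600000000';
```

This shows the pending command text, which helps confirm whether the constraint-violating statement is still queued even after the row was fixed by hand.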


    0 0

    I am facing a SQL Server replication issue (identity management in a pull merge replication at the Subscriber).

    Replication situation:

    Distributor and the Publisher are in one server running Windows Server 2012 Std and SQL Server 2012 Std
    One Subscriber PC running Windows 7 Professional and SQL Server 2012 Express Edition
    Both are connected through the internet using VPN

    The Problem:

    Subscriber has an article (table) [DocumentItems] whose identity column [DocumentItemsID] is managed by replication and was assigned the following range:

    ([DocumentItemsID]>(280649) AND [DocumentItemsID]<=(290649) OR [DocumentItemsID]>(290649) AND [DocumentItemsID]<=(300649))

    The server has been disconnected from electricity several times. Every time the Subscriber PC comes back up, the [DocumentItemsID] column picks an identity value outside its range, such as 330035, when inserting new rows.

    The issue happened 3 times. I fixed the problem by a manual reseed:

        DBCC CHECKIDENT('DocumentItems' , RESEED, xxxx)

    Where xxxx is the MAX existing value for [DocumentItemsID] + 1

    Once the electricity is disconnected again, the same problem occurs.

    Does anybody have any idea what is happening? And why the [DocumentItemsID] field was assigned values out of its range?
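When this happens again, a quick sketch for comparing the current identity value with the replication range constraint (table and column names taken from the post):

```sql
-- Report the current identity value without changing it
DBCC CHECKIDENT('DocumentItems', NORESEED);

-- Show the identity-range check constraint that replication created
SELECT name, definition
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('DocumentItems');
```

For what it's worth, SQL Server 2012 caches identity values and can skip up to 1,000 values for an int column after an unexpected shutdown (trace flag 272 disables that caching), which matches the out-of-range jumps described; whether that is the cause here would need to be confirmed.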


    0 0

    Good morning,

    We currently have a large database which we are partitioning, by year, into multiple filegroups.

    From this, we replicate a subset of the main data table (omitting any varbinary(max) columns) to a separate server for reporting purposes.

    The replication only started in the last 9 months, so this is a new situation for us.

    What we need to do is add a new filegroup/file to the main database and also alter the partition scheme and partition function. That in itself is easy enough and we have no problem with it. I noticed that when I altered the partition function, a lot of data was moved from one filegroup to another. I'm not entirely sure whether this was by the original DBA's design, or whether the developers didn't quite understand what they were doing when designing the app itself.

    My concern is what will happen at the subscriber.

    I am worried that the addition of a new FG will either break replication, or that it will work fine but all the t-log activity from data being moved from FG1 to FG2 will need to be replayed on the reporting server (this is about 1 million rows).

    Because we're only moving a subset of the main table, I am hoping that, should the t-log be replayed, it won't take too long (it only takes an age on the main database because we're moving image files).

    Does anybody have experience with such a situation and are they able to give me a little insight as to what I can expect to happen?
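The change described above can be sketched as follows (all object and file names here are placeholders, not taken from the actual database):

```sql
-- Add the new filegroup and file to the main database
ALTER DATABASE MainDB ADD FILEGROUP FG_2015;
ALTER DATABASE MainDB ADD FILE
    (NAME = N'MainDB_2015', FILENAME = N'D:\Data\MainDB_2015.ndf')
TO FILEGROUP FG_2015;

-- Point the partition scheme at the new filegroup, then split
ALTER PARTITION SCHEME ps_ByYear NEXT USED FG_2015;
ALTER PARTITION FUNCTION pf_ByYear() SPLIT RANGE ('2015-01-01');
```

Note that splitting a boundary into a non-empty partition physically moves rows and is fully logged, which would explain the data movement observed when the function was altered; splitting while the target partition is still empty avoids that movement.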


    0 0

    Hello there,

    We have a little situation and are kind of confused about the approach. We have a database which is transactionally replicated across two sites. The publisher and subscriber databases are NOT identical; I don't know how, but the subscriber has more data than the publisher.

    Now we want to perform a data operation on the subscriber. This is a slightly bigger data surgery, with a mix of inserts and deletes. What confuses us is whether we need to stop/pause replication, do the data surgery, and then re-enable it without re-initializing or re-snapshotting, or whether we can just go ahead and do the data surgery without worrying about replication. Since the changes are happening on the subscriber side, replication shouldn't cause an issue.
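If the decision is to pause delivery during the surgery, one sketch (the job name below is a placeholder for the actual Distribution Agent job on this topology):

```sql
-- Hypothetical: stop the Distribution Agent job for the surgery window
EXEC msdb.dbo.sp_stop_job  @job_name = N'ServerA-PubDB-Pub1-ServerB-1';

-- ... perform the inserts and deletes at the subscriber ...

EXEC msdb.dbo.sp_start_job @job_name = N'ServerA-PubDB-Pub1-ServerB-1';
```

Either way, note that subscriber-side changes are not picked up by transactional replication, but they can cause later delivery failures, e.g. a replicated UPDATE or DELETE arriving for a row the surgery removed.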

    Any suggestions would be highly welcomed.

    Many thanks


    Please mark posts as answer or helpful when they are.

    0 0

    I have 2 servers for this:

    1. Windows Server 2008 with SQL Server 2012

    2. Windows Web Server 2008 with IIS 7 and SQL Client Connectivity Tools installed. I get the following results:

    Class Initialization test:

    Class                  Status   ErrorCode
    replisapi.dll classes  SUCCESS  0x0
    CLSID_SQLReplErrors    FAILED   0x80040154
    replrec.dll classes    FAILED   0x80040154
    msxml6.dll classes     SUCCESS  0x0

    Where is replrec.dll supposed to be: on the web server or the database server? And if the answer is the web server, how can I install it?

    Thank you,



    0 0

    Using SQL Server 2008, I used a T-SQL script to set up transactional replication, including sp_addarticle. I did not plan to replicate default values, but they replicated anyway. After seeing them on the subscriber, I generated the script for the publication (using SSMS) to check the @schema_option value. It was 0x000000000803108F. Notice that 0x800 is not set. So why are default constraints replicating? That's my question.

    As scripted by SSMS, after seeing the defaults show up on the subscriber:

    exec sp_addarticle @publication = N'DBDistribution-GroupCharlie-Tables', @article = N'DistributionContract', @source_owner = N'dbo'
    , @source_object = N'DistributionContract', @type = N'logbased', @description = N'', @creation_script = N'', @pre_creation_cmd = N'truncate'
    , @schema_option = 0x000000000803108F
    , @identityrangemanagementoption = N'none', @destination_table = N'DistributionContract', @destination_owner = N'dbo'
    , @status = 24, @vertical_partition = N'false'
    , @ins_cmd = N'CALL [dbo].[sp_MSins_dboDistributionContract]'
    , @del_cmd = N'CALL [dbo].[sp_MSdel_dboDistributionContract]'
    , @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboDistributionContract]'


    Note:  the table did not exist on the subscriber, so applying the snapshot created it.  This query against the subscriber shows that all the publisher's constraints were created about 10 minutes after the table was created.

    select, t.create_date,, df.create_date
    from sys.default_constraints df
    join sys.tables t on df.parent_object_id = t.object_id
    where = 'distributioncontract';
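As a side note, whether a given @schema_option value includes the 0x800 bit (copy default constraints) can be checked with a quick sketch:

```sql
-- Test the 0x800 bit of the schema_option value reported by SSMS
DECLARE @schema_option bigint = 0x000000000803108F;
SELECT CASE WHEN @schema_option & 0x800 = 0x800
            THEN '0x800 is set'
            ELSE '0x800 is not set'   -- the case described in the post
       END AS default_constraint_bit;
```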

    0 0

    I have a situation as follows:

    The Server
    SQL Server 2012 Standard Edition installed on Windows Server 2012 Standard Edition
    Active Directory is installed on the same server as well
    Remote Access Role added and configured to connect VPN 
    DNS Role added
    Windows Firewall is disabled
    The Server is connected to the internet 
    SQL Server Service & SQL Browser both are running under domain accounts
    SQL Server allows remote connections

    The Router
    The router that connects the server to the Internet is configured to:
    Enable VPN Tunnels Protocols (PPTP, L2TP and IPSec)
    Forwarding > Virtual Servers (all requests on TCP and UDP on all ports to the server local IP)

    The Client
    PC running Windows 7 SP1 with SQL Server 2012 Express 
    Joined AD on the server
    Connected to the internet
    VPN Connected to the Server
    Can Remote Desktop the Server
    Can ping the server host name
    Can nslookup the server host name

    The Problem
    If both the Server and the Client are connected to the same local area network, the Client can connect to SQL Server.
    Once the Client is placed in a different location connected to the Internet, with the VPN connected as described above, I could not connect to the Server using:
    Windows Authentication domain users, or
    SQL Server users

    and the error message is:
    Cannot connect to SERVER\SQLINSTANCE.

    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)


    Any thoughts?

    Thanks in advance


    0 0


    I have 26 publishers with transactional replication to a central subscriber.

    All the distributor properties and publication properties in these publishers are the same.

    5 of these 26 publishers (publisher/distributor on the same server) have huge distribution log files (up to 100 GB), and the log space used is over 90%, even though the recovery model is simple.

    I have already tried configuring the distribution retention and the publication retention, checked the cleanup jobs, and even executed the cleanup procedures myself, but the % space used in the huge logs is still over 90%.

    I can understand log retention in the publication database, because the Log Reader Agent needs to read the log and send the information to the Distributor, but I can't understand this retention in the distribution database. As far as I know, the information in the distribution database is inside the system tables, and the cleanup jobs delete old information from those tables; so why do I still have this problem in the distribution log, and why on only 5 of the 26 distributors?

    How can I shrink the distribution log in this case? The database is already in the simple recovery model; I don't understand the log retention.
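Before shrinking, it may help to confirm what is actually holding the log (the log file's logical name below is an assumption):

```sql
-- Why can't the distribution log truncate?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'distribution';

-- Log size and percent used for every database
DBCC SQLPERF(LOGSPACE);

-- Once the wait reason clears (in simple recovery, after a CHECKPOINT):
-- CHECKPOINT;
-- DBCC SHRINKFILE (N'distribution_log', 1024);
```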

    Thank you !

    Dennes - If this resolved it, please rate the message - NOVO DVD Segurança no ASP.NET

    0 0

    Hello to all, Got a replication issue here. I might have found the answer but I'm afraid to run it and want to check with experts first.

    I have transactional replication set up on SQL Server 2012 SP1, with 13 publications. Problem: the distribution data file is 19 GB, and the distribution log file has ballooned to a dangerous 700 GB.

    Setup:
    1) Transactional replication, running continuously
    2) 13 publications
    3) Largest database = 200 GB
    4) Distributor properties: transaction retention 0-72 hours, history retention 48 hours
    5) Subscriptions all configured as pull subscriptions
    6) Distribution DB recovery model: simple
    7) Distribution clean up: distribution job runs fine in 13 min (kind of long, though)
    8) Agent history clean up runs fine (2 sec)
    9) Expired subscription clean up runs fine (17 sec)

    I noticed a few setup inefficiencies and recently made this change: changed each publication from "never expire" to a 120-hour retention.

    Manually ran some diagnostics:

    Segment Name      Group Id  Size in MB  Space Used  Available Space  Percent Used
    distribution      1         19629.00    3635.81     15993.19         18.52
    distribution_log  0         774523.06   741109.16   33413.91         95.69

    No active open transactions

    Manually ran EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 0, @max_distretention = 72
    Result: Removed 0 replicated transactions consisting of 0 statements in 80 seconds (0 rows/sec).

    DBCC Loginfo
    Returns 1104 rows

    SELECT name, log_reuse_wait_desc FROM sys.databases

    Name          log_reuse_wait_desc
    distribution  LOG_BACKUP
    REPLuserdb    REPLICATION

    Manually confirmed all data from source to subscriber has been replicated and is a match. The data is all there at the subscriber.

    Considering running the following command I read about in a few blogs: EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time= 0, @reset = 1

    Then running the distribution cleanup job again, or shrinking the log. Is there any danger in this? Is this the right approach? THANKS!!!

    0 0

    Since we have enabled Merge replication on our tables, some of our SSIS packages have broken.

    The issue is always the same, update statements sent to certain tables through an OLE DB Command are broken.

    The error message is

    "The metadata could not be determined because statement ‘exec @retcode = sys.xp_mapdown_bitmap @mapdownbm, @bm output’ in procedure ‘sp_mapdown_bitmap’ invokes an extended stored procedure.

    Unable to retrieve destination column descriptions from the parameters of the SQL Command."

    Some Googling brought us to this link, though it is for SQL 2005 and we are using SQL 2012: 


    And indeed, since then we have found out that the issue only occurs on tables whose merge replication trigger contains a call to sp_mapdown_bitmap. Unlike the article above, we only have the issue in SSIS. The same update statement executed from SSMS works without a problem. We even tried making a stored procedure that performs the update, and then calling that procedure from SSIS, but no luck. The procedure runs perfectly from SSMS but won't validate in SSIS.

    Of course, recreating the tables in question would solve the problem for now, but we need a more permanent solution. Columns will be added and removed in the future, and recreating a table each time that happens isn't realistic.

    0 0

    I have been unsuccessful in getting merge replication running in a new test environment. I am using SQL Server 2012 SP1 with a small merge publication. I am using an alternate snapshot folder, and I would like to compress the snapshot, but one of the articles said to try to get it working without compression first. The agent says the process could not read the file: Login Failure. I have set the agents to use the sa credentials. We cannot use domain credentials because we are not using ADC.



    0 0

    Hello DBA world :)

    Kindly, I am working on configuring SQL Server replication on SQL Server 2008 R2 to replicate data from a source DB (SQL Server 2008 R2) on Server 1 to a destination DB (Oracle 11g) on Server 2.

    I had configured the replication successfully after installing the Oracle 11g client driver, and any change that occurs in the SQL DB (source) is replicated in less than a second to the Oracle 11g DB (destination).

    The point is that the replication, during initialization, creates a snapshot of the source SQL DB data and recreates the destination schema. This means no historical data will remain in the destination database and no schema difference between the source and destination databases is allowed; any schema difference can only appear after the replication has been initiated.

    Our requirements which need to be maintained are:
    1- Configure SQL Server replication without deleting or changing any historical data that existed in the destination Oracle DB before replication was configured (overriding the replication initialization snapshot); that historical data should change only if the same rows change on the source database after replication starts.

    2- Start replication without changing the destination Oracle schema, keeping the destination schema unchanged during replication initialization (overriding the replication initialization snapshot).

    Take into consideration that I tried to edit the (.SCH, .PRE) files to prevent creation of the schema during replication initialization; however, if the schemas are not identical in the source and destination DBs, the initialization fails with an error (cannot apply bulk insert during initialization due to schema difference), and we need to keep the destination schema difference as it is.

    Is there any solution or configuration that fulfills the above 2 requirements during replication initialization and overrides its own default behavior?
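One avenue that may be worth testing (a sketch only, not a configuration verified against an Oracle subscriber; publication, article, and subscriber names are placeholders) is creating the articles so the snapshot neither drops nor recreates the destination table, and adding the subscription without applying an initialization snapshot:

```sql
-- Leave the existing destination table untouched
EXEC sp_addarticle
     @publication      = N'SqlToOraclePub',
     @article          = N'MyTable',
     @source_object    = N'MyTable',
     @pre_creation_cmd = N'none',  -- do not drop/truncate the destination
     @schema_option    = 0x00;     -- do not script the table creation

-- Add the subscription so no snapshot is delivered
EXEC sp_addsubscription
     @publication = N'SqlToOraclePub',
     @subscriber  = N'ORACLE_SUBSCRIBER',
     @sync_type   = N'replication support only';
```

With this approach the destination must already contain consistent data, since no snapshot reconciles the two sides.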

    Three further questions, if anyone can help:
    1- If there is a constraint on a certain column in the destination database which does not allow a certain value to be inserted into one of the tables, and the source database contains such data, which replication will transfer to the destination DB, and the constraint prevents it from being inserted: will the whole replication fail, or only that particular record?

    2- What is the best-practice tool or way to monitor the replication?

    3- Can I script the whole replication solution, including any further customization made to the replication configuration, and deploy it to different servers automatically?

    I hope someone can help, please.

    Best Regards

    Shehab Saad

    0 0

    Hi There,

    How can we import/copy a database from SQL Server 2008 R2 to SQL Server 2008?

    0 0

    hello everyone,

    Is there any way to add one article to an already-configured merge publication without a snapshot of the entire database, as in transactional replication?

    Note: when I add one article in merge replication, I want it to take a snapshot of the newly added article only.
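For comparison, a hypothetical sketch of adding a single article to an existing merge publication (all names are placeholders):

```sql
EXEC sp_addmergearticle
     @publication               = N'MyMergePub',
     @article                   = N'NewTable',
     @source_object             = N'NewTable',
     @force_invalidate_snapshot = 1;
```

Whether the next snapshot run covers only the new article depends on the publication's settings, so that behavior should be verified on a test publication first.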

    0 0

    Hello All,
    I am trying to set up pull transactional replication, with the Publisher and Distributor on one server and the Subscriber on another server.

    What I did:
    shared the snapshot folder as \\servername\repldata and supplied it while configuring the Distributor
    ran the Log Reader and Snapshot Agents under the SQL Agent accounts
    set up the Distributor with a distributor_admin login and gave that login while configuring the Subscriber

    I am getting this message from the Log Reader Agent: "No replicated transactions are available"

    I don't know where it went wrong.

    Please give any suggestions or ideas; it is kind of urgent and important.

    Thanks and regards

    0 0

    My team has been troubleshooting this for months now. SQL Server 2008 R2.

    Our main DB and server is "Server A": a physical machine with a single virtual instance on it. It does not share with anyone, it has its own physical hard drives and 20 GB of memory, and it hosts our 1 TB DB. It is very fast, which is perfect. It performs nightly loads of data, which users access throughout the day. It is not a transactional DB; it is more of a data warehouse DB.

    Our second SQL Server, "Server B", has the exact same specs. Actually, we doubled the memory to 40 GB since it was slow, and it is still 4 times slower than Server A! The only difference is that its hard drive is on a SAN, which I am told is state of the art, fast, etc.

    We set up replication on Server A to replicate to Server B. The distribution DB is on Server B as well.

    We did a test where we shut down the distribution DB, since it seems to take up about 30% of the CPU usage, and the queries we ran were still 4 times slower!

    Our next step is to try putting the distribution DB, which is on Server B, on its own volume.

    After that, the next step is to try putting Server B on its own server (not sharing with any other companies) with its own physical hard drives, like Server A.

    Can anyone offer some insight, help, thoughts, etc.? Thank you.

    0 0

    I wish to replicate one table into two destination tables, e.g. columns 1-30 into table 1, and columns 1 and 31-50 into table 2. How can I accomplish this? Perhaps using views?
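Since transactional replication allows a table to appear in only one article per publication, one sketch is two publications, each carrying a vertically partitioned article with a different destination table (all names below are placeholders):

```sql
-- Publication 1: the columns 1-30 slice into DestTable1
EXEC sp_addarticle
     @publication        = N'Pub_Slice1',
     @article            = N'SourceTable',
     @source_object      = N'SourceTable',
     @destination_table  = N'DestTable1',
     @vertical_partition = N'true';  -- start with only the key columns

-- Then include each wanted column explicitly:
EXEC sp_articlecolumn
     @publication = N'Pub_Slice1',
     @article     = N'SourceTable',
     @column      = N'Col1',
     @operation   = N'add';
```

A second publication would do the same for columns 1 and 31-50 into DestTable2. Views are the other option mentioned: publishing an indexed view can also project a column subset, with its own restrictions.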

    0 0
    12/18/07--15:59: Merge replication problem

    Hi,

    I created a merge publication on a SQL Server 2005 server. I deleted this publication and tried to recreate it, and got the following error:


    An exception occurred while executing a T-SQL statement or batch:

    Invalid object name 'dbo.sysmergepublications'.


    I tried this with a different publication name; it still did not work.

    Please help.





    0 0

    I'm very confused about the status (or future status) of transactional replication. In several places on the MSDN pages I've seen notices that transactional replication is being deprecated and should not be used, but there is no recommendation about what I should replace it with. Yet in other places this type of replication is recommended as the best solution for a given scenario.

    For example, the following is suggested for a scenario similar to our situation: ms-help://MS.SQLCC.v10.SQLSVR.v10.en/s10rp_0evalplan/html/4b177aa2-2f76-43d2-8978-6bbd01b10337.htm (Overview (Replication) > Replicating Data in a Server to Server Environment > Improving Scalability and Availability)

    Currently we are using SQL 2008 R2 STANDARD edition in our data centers but plan to migrate to SQL 2014 late this year (assuming Microsoft releases it sometime soon).

    Our objective is to have 3 data centers (Europe, N. America, Asia), each serving its geographic region under normal operations, but any one of them able to take over, either as part of load balancing or as a fail-over (HA). We have a large number of web applications containing various mixtures of EF, LINQ, and embedded SQL, with frequent use of multiple databases within a given SQL instance.

    Given the above, what would be the "best practice" recommended implementation for our data network configuration?

    Developer Frog Haven Enterprises

    0 0

    Hi Guys,

    I want to know the server hardware requirements for replicating data to 500 subscribers (merge and transactional replication).

    Thanks & Regards,

    Kareem Nour 
