Channel: SQL Server Replication forum

[MS-SQL/SQLite] Sync MS-SQL Server DB with SQLite DB


Hello Everybody!

I need to know if the following situation is possible:

Client devices:

  • WinRT 8.1 Surface tablet
  • Windows 8.1 Phone smartphone
  • Windows 8.1 desktop PC

On the client devices I can't use MS-SQL Server, so I use SQLite as the local database engine.

Server:

  • Hosting a web service.
  • Running MS-SQL Server.

Workflow:

When one of the client devices makes changes to its local (SQLite) database, it notifies the server through the web service. On the web server the client databases are linked to the MS-SQL Server with the ODBC driver (see link below), so the MS-SQL Server can communicate with the client's database and merge the records from the client into the main server database.

Question:

Is this situation possible?

Linking SQLite with MS-SQL:

http://community.spiceworks.com/how_to/show/2271-create-ms-sql-linked-server-to-the-spiceworks-sqlite-server

Thanks in advance!
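
For reference, a minimal sketch of the linked-server approach described above, assuming an ODBC DSN named SQLiteClientDSN that points at the uploaded client database, and with illustrative table/column names:

-- Hedged sketch: create a linked server over the SQLite ODBC driver (DSN name is an assumption)
EXEC sp_addlinkedserver
	@server = N'SQLITE_CLIENT',
	@srvproduct = N'',
	@provider = N'MSDASQL',
	@datasrc = N'SQLiteClientDSN';

-- Merge client rows into the main server table (table and column names are illustrative)
MERGE dbo.Orders AS target
USING (SELECT OrderId, OrderDate, Amount
       FROM OPENQUERY(SQLITE_CLIENT, 'SELECT OrderId, OrderDate, Amount FROM Orders')) AS source
ON target.OrderId = source.OrderId
WHEN MATCHED THEN
	UPDATE SET target.OrderDate = source.OrderDate, target.Amount = source.Amount
WHEN NOT MATCHED THEN
	INSERT (OrderId, OrderDate, Amount)
	VALUES (source.OrderId, source.OrderDate, source.Amount);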


Mirroring and Replication for HA/DR Setup

We are looking at using mirroring to replace the DR solution previously set up in our Hyper-V environment, now that we have found replication is not supported with Hyper-V Replica, which is what we had been using.

Our situation: AppDBServer is our application database server, and we replicate to RepReportingServer using transactional replication so that reporting runs against that server rather than hitting AppDBServer. We only replicate a subset of articles for performance reasons, and we also have many additional reporting tables in the replicated database, so we need to keep that structure intact to avoid rewriting or touching reports.

I understand we can use mirroring for HA/DR of AppDBServer and configure replication to handle failover using the -PublisherFailoverPartner agent parameter, which would give us three servers in the setup: AppDBServer, AppDBServerMirror and RepReportingServer. But how do I "mirror" RepReportingServer? That seems to be the problem for me here. I need this reporting server in some kind of DR failover environment as well.

Unable to configure Oracle Publisher in SQL SERVER 2008 R2


Hi, All,

I am trying to talk to my Oracle database from SQL Server. I already have Oracle Client 11g installed on my SQL Server machine.

Following the guidance on TechNet, I configured the Distributor successfully. I am now trying to configure the Publisher by choosing the "Add Oracle Publisher" option. It then prompts for a user id & password, and I entered the login (the replication user id created in Oracle). I encounter the following error message:

TITLE: Distributor Properties
------------------------------

Oracle server instance 'ssluat' cannot be enabled as a Publisher because of the following error:

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1&EvtSrc=Microsoft.SqlServer.Management.UI.ConfigureWizardErrorSR&EvtID=OraclePublisherValidateFailed&LinkId=20476

Quote:

------------------------------
ADDITIONAL INFORMATION:

Unable to run SQL*PLUS. Make certain that a current version of the Oracle client code is installed at the distributor. For addition information, see SQL Server Error 21617 in Troubleshooting Oracle Publishers in SQL Server Books Online. (Microsoft SQL Server, Error: 21617)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=21617&LinkId=20476

------------------------------

Unquote


In fact, I can run SQL*Plus from a command prompt on my C: drive, and the Path environment variable has been set to "c:\app\product\11.2.0\client_1\bin". So I don't know why this message appears.

I have searched for guidance on this error on TechNet and done what I can to troubleshoot, but I still get the error.
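
For reference, the text of error 21617 points at the Oracle client as seen by the SQL Server process rather than by your interactive session. One hedged way to check what the service account can find, assuming xp_cmdshell is enabled, is:

-- Hedged sketch: can the SQL Server service account locate sqlplus on its PATH?
EXEC master..xp_cmdshell 'where sqlplus';

-- If this returns nothing, add the Oracle client bin directory to the *system* PATH
-- (not just the user PATH) and restart the SQL Server service so it picks up the change.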

Kindly please help me.

Thank you.

Avelyn



An error occurred applying the changes to the Distributor?


I'm trying to create a new Oracle publisher, but I always get the following error. I had already dropped the old publisher using exec sp_dropdistpublisher @publisher='old'. (However, dropping it now produces errors; see http://social.msdn.microsoft.com/Forums/en-US/home?forum=sqlreplication.)

TITLE: Distributor Properties

An error occurred applying the changes to the Distributor.


ADDITIONAL INFORMATION:

SQL Server could not enable 'newpub' as a Publisher. (Microsoft.SqlServer.ConnectionInfo)


An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


The Oracle server [newpub] is already defined as the Publisher [old] on the Distributor [MyServer].[distribution]. Drop the Publisher or drop the public synonym [MSSQLSERVERDISTRIBUTOR]. Changed database context to 'master'. (Microsoft SQL Server, Error: 21646)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=10.00.4000&EvtSrc=MSSQLServer&EvtID=21646&LinkId=20476
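
For reference, the error text above offers two remedies: drop the old Publisher registration on the distributor, or drop the MSSQLSERVERDISTRIBUTOR public synonym on the Oracle side. A hedged sketch of both (names assumed):

-- On the SQL Server distributor: force removal of the old Oracle publisher registration
EXEC sp_dropdistpublisher @publisher = N'old', @no_checks = 1;

-- On the Oracle instance, run as a suitably privileged user (e.g. the replication admin schema):
-- DROP PUBLIC SYNONYM MSSQLSERVERDISTRIBUTOR;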

Determining datetime when most recent command was replicated


Given a particular publication and subscriber, can someone point me to how I can determine the datetime when the latest command was replicated to that subscriber for that publication?

Specifics:

     I'm using transactional replication on SQL Server 2012.

     The publishing server is also the distributor.
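
For reference, one place this information lives is the Distribution Agent history in the distribution database. A hedged sketch, treating the publication and subscriber database names as assumptions:

-- Run in the distribution database on the distributor
SELECT TOP (1) h.[time] AS last_agent_activity, h.xact_seqno, h.comments
FROM dbo.MSdistribution_history AS h
JOIN dbo.MSdistribution_agents AS a ON a.id = h.agent_id
WHERE a.publication = N'MyPublication'        -- assumption: your publication name
  AND a.subscriber_db = N'MySubscriberDB'     -- assumption: your subscriber database
ORDER BY h.[time] DESC;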

Selectively deleting rows from MSMERGE_GENHISTORY!


I know this is a big no-no, but I think I don't have a better option at this point. We have a SQL 2012 publisher and 230 SQL 2008 Express subscribers. The data retention is 3 days, and the publication is set with @partition_options = 3, which means the subscription data is non-overlapping (unique).

For some reason, we see a large number of rows in the MSmerge_genhistory table, even though all the subscribers are in sync. In MSmerge_genhistory a couple of rows show genstatus 1 (meaning closed), and 99% of the rows have genstatus 2 (also closed, but meaning the data originated at a different subscriber).

What can I do to bring down the record count in this table? Can I purge the data in this table, since the subscribers are in sync? Has anyone ever purged data from this table?

I tried exec sp_mergemetadataretentioncleanup, but didn't see any improvement.
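
For reference, sp_mergemetadataretentioncleanup has output parameters that report how many rows it actually removed, which helps confirm whether it did anything, and a quick breakdown of generation rows by status shows where the volume sits. A hedged sketch:

-- How many rows did the cleanup actually remove?
DECLARE @gen int, @contents int, @tombstone int;
EXEC sp_mergemetadataretentioncleanup
	@num_genhistory_rows = @gen OUTPUT,
	@num_contents_rows = @contents OUTPUT,
	@num_tombstone_rows = @tombstone OUTPUT;
SELECT @gen AS genhistory_removed, @contents AS contents_removed, @tombstone AS tombstone_removed;

-- Distribution of generation rows by status (per the description above: 1 = closed, 2 = closed but originated at another replica)
SELECT genstatus, COUNT(*) AS generation_rows
FROM dbo.MSmerge_genhistory
GROUP BY genstatus;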

-- PUBLISHER 

Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) 
Oct 19 2012 13:38:57 
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

-- SUBSCRIBER 

Microsoft SQL Server 2008 (SP2) - 10.0.4064.0 (Intel X86) 
Feb 25 2011 14:22:23 
Copyright (c) 1988-2008 Microsoft Corporation
Express Edition with Advanced Services on Windows NT 5.2 <X86> (Build 3790: Service Pack 2)

Adding an index to a table at the subscriber end


Hi,

We are currently using SQL Server 2000 with transactional replication, and I want to add an index to a table at the subscriber end to improve the performance of the reporting we do, but I'm wary of this in case it breaks replication.

Can anyone advise whether this can be done without causing any issues?
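
For reference, a nonclustered index created directly at the subscriber is just a local schema addition; a hedged sketch with illustrative names is below. Note that it would be lost if the subscription were ever reinitialized from a new snapshot, since the default article settings drop and recreate the table.

-- Hedged sketch (run at the subscriber); index and table names are illustrative
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId);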

Thanks

Mike

ERROR - 0x8004563e: The publication 'PubMergeTest' does not allow web synchronization.


Hi All,

While doing web synchronization in merge replication, I am getting the error shown below:

CReplicationListenerWorker    , 2014/04/29 08:45:01.570, 8740,   174,  S2, INFO: =============== START PROCESSING REQUEST ==============
CHttpListener                 , 2014/04/29 08:45:01.571, 8740,   258,  S2, INFO: Exchange ID = DB5A013B-8EAC-4782-9902-A01E67CD5818.
CReplicationListenerWorker    , 2014/04/29 08:45:01.573, 8740,   298,  S2, INFO: Processed request type: MESSAGE_TYPE_SyncContentsUpload.
DatabaseReconciler            , 2014/04/29 08:45:01.818, 8740, 13175,  S2, INFO: Reading profile: AgentID:1, AgentType:4, ProfileName:
DatabaseReconciler            , 2014/04/29 08:45:01.887, 8740, 25122,  S2, INFO: [WEBSYNC_PROTOCOL] Received client ReconcilerPhase WebSyncReconcilerPhase_ReinitSchemaAndFiles
replrec!FillErrorInfo         , 2014/04/29 08:45:01.887, 8740, 20097,  S1,ERROR: ErrNo = 0x8004563e, ErrSrc = <null>, ErrType = 9, ErrStr = The publication 'PubMergeTest' does not allow web synchronization.
DatabaseReconciler            , 2014/04/29 08:45:01.895, 8740, 20210,  S2, :T:,110,113,32778,,,,,,,
DatabaseReconciler            , 2014/04/29 08:45:01.895, 8740, 20217,  S2, INFO: Session Highlights: REINIT, FAIL, WEBSYNC_SERVER, 
CReplicationListenerWorker    , 2014/04/29 08:45:01.896, 8740,   321,  S1, ERROR: Failure in reconcile, hr = 0x8004563e.
CReplicationListenerWorker    , 2014/04/29 08:45:01.897, 8740,   396,  S2, INFO: =============== DONE PROCESSING REQUEST ===============

I tried to search on the internet, but I couldn't find any solution for this issue.

Please help if you have ever faced this issue.
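
For reference, web synchronization is a per-publication option, and the error text suggests it is turned off for 'PubMergeTest'. A hedged sketch of enabling it at the publisher (verify whether a snapshot refresh is needed afterwards for your version):

-- Run at the publisher, in the publication database
EXEC sp_changemergepublication
	@publication = N'PubMergeTest',
	@property    = N'allow_web_synchronization',
	@value       = N'true';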

Thanks.....


replication


Can replication work between two servers that are in different domains?

What precautions need to be taken to have successful replication in that setup?

Can anyone please answer this?

replication error skipping file because it has already been delivered


I've set up replication, published the database, and set up the subscriber (on a separate server within the same domain).

I'm getting an error message stating:

"replication error skipping file because it has already been delivered"

How do I resolve this?

Sql Server Replication Advice - In desperate need of good advice


I am a new developer and have written an application which solves some very important problems for my company. At the moment, I am stuck on replication. I believe what I need is merge replication, but before I waste any more valuable time I wanted to ask for advice. The specific problem I am trying to solve is this: the company has around thirty marine vessels which frequently lose internet connection. The application I have written writes to a SQL Server 2012 database installed locally on each marine vessel. When the vessels have a connection, it also writes to a SQL Server 2012 database which currently resides on an Azure VM. What I need to happen is this: I need the databases on the marine vessels to replicate with the remote database. I believe that I need the local databases on the marine vessels to be the publisher and distributor, and that I want the changes to be pushed to the remote server residing on the Azure VM. The setup has been so hard to implement (probably because I'm still dealing with the learning curve) that I am wondering if this is the best way to go. Any advice on whether this is the correct way to go, or whether there are better alternatives, is greatly appreciated.

I have also tried using the Sync Framework and the SQL Azure sync agent. I have gotten both to achieve exactly what I am after, except for one thing: the application residing on the marine vessels will create new tables, and I have not found any way to add the new tables to the sync schema without manually going in and setting them up. Since I will not be on these vessels, that's not really an option. If there is a way to add the new tables programmatically, then the sync agent would work perfectly for me.

 Thanks to all for any help.

Merge Conflict Resolution without using SSMS


So, 

We have a scenario that requires a level of "conflict resolution" that is not readily explained in all the Replication Documentation on MSDN (or at the very least is not easily found).

So far I have found EnumMergeConflictCounts() and EnumConflictTables() on the ReplicationDatabase class in the SMO/RMO libraries. (I know this is primarily RMO, but it is dependent on SMO.) These are great for telling me which articles had conflicts, and they are an initial entry point to discovering what the conflicts are should I wish to proceed further. However, that seems to be about the extent of the information, and if I want more I have to go to the specific MSmerge_conflict_* tables for the indicated articles to see what the conflicts were. (I may be wrong, so please feel free to correct me.)

The problem we have is such:

Much of our system is based on audit tracking, and thus we utilize 1-1 style junction tables. To elucidate further:
Cabinets, Televisions, Cabinet_Television. Televisions is only the list of the types of televisions, and Cabinets is the list of the physical cabinets. Cabinet_Television retains a link to both tables, with Installed and Removed date columns respectively. The Cabinet_Television record that has an Installed date but a Removed of NULL is the current television in the cabinet. If they install a different television, the current Cabinet_Television record has its Removed column assigned, and a new Cabinet_Television record is created pointing at the new television, with a Removed of NULL.

A generally straightforward system (and we have this scenario in multiple places throughout our database schema).

However, as you can guess, two "subscriber" systems which are not currently connected to the Publisher's network will be able to "install" the same television in the same cabinet (or perhaps different televisions in the same cabinet), and in their independent databases they will generate unique Cabinet_Television records. Upon synchronization, we end up with two televisions installed in the same cabinet at the same time, which is invalid.

Now, granted, this is a business logic decision for which we have to come up with our own solution, which may or may not include an interactive "resolver" for the user of our application. However, it is the nature of how to do this resolution that has me baffled. I can use EnumMergeConflictCounts() to detect conflicts, and/or EnumConflictTables() to find where the conflicts exist, but I have not seen any RMO (or SMO, though unlikely for this scenario) methods or classes that would allow for the following:

  • Retrieve, per ArticleConflict object, the list of actual conflicts.
  • Once the user selects the "winner", instruct the Publisher/Publication with the "correct" value.
  • Remove the conflict record and mark it as resolved.

I realize that, for our system with these junction tables, we will need to handle the business logic ourselves and adjust our data with respect to the duplicate or otherwise "invalid" junction record, removing or updating it as necessary; but the resolution on the SQL Replication side, changing the winner and marking the conflict resolved, is currently a mystery to me.
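
For reference, if RMO does not surface enough detail, the same conflict metadata can be reached with T-SQL. A hedged sketch with assumed publication/article names (verify the exact sp_deletemergeconflictrow parameter list for your version in Books Online):

-- List logged conflicts with their reasons
SELECT rowguid, origin_datasource, conflict_type, reason_text
FROM dbo.MSmerge_conflicts_info;

-- Inspect the losing rows for a specific article (conflict table name is an assumption)
SELECT * FROM dbo.MSmerge_conflict_MyPub_Cabinet_Television;

-- After your own logic has applied the winning values, remove the logged conflict row
EXEC sp_deletemergeconflictrow
	@conflict_table = N'MSmerge_conflict_MyPub_Cabinet_Television',
	@rowguid = '00000000-0000-0000-0000-000000000000';   -- placeholder: the conflicting row's rowguid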

Thanks

Jaeden "Sifo Dyas" al'Reac Ruiner





"Never Trust a computer. Your brain is smarter than any micro-chip."
PS - Don't mark answers on other people's questions. There are such things as Vacations and Holidays which may reduce timely activity, and until the person asking the question can test your answer, it is not correct just because you think it is. Marking it correct for them often stops other people from even reading the question and possibly providing the real "correct" answer.

replication agent has not logged a progress


Hi expert,

I have a problem with my transactional replication.

Below are the details of my SQL Server setup.

OS = Windows Server 2008 R2 Enterprise

SQL Server = MS SQL Server 2008 R2

Two Servers (by the way I'm using Hyper-V VM)

1. SQL-1 is the publisher

2. SQL-2 is the subscriber

Right now I have this error message

The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active.

From Replication Monitor, it was the Snapshot Agent that was having this error.

Meanwhile, the Log Reader Agent status is 'running' with no errors.

How do I troubleshoot this error message so I can get my replication going again?
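
For reference, the 10-minute threshold in that message is the distributor's heartbeat interval, and a long-running Snapshot Agent can trip it without anything actually being broken. A hedged sketch of raising it while you investigate (run at the distributor; the new value is an assumption):

EXEC sp_changedistributor_property
	@property = N'heartbeat_interval',
	@value    = 30;   -- minutes, instead of the default 10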

Let me know if you need further information/log.

Thanks in advance,

Afira Imra

The certificate cannot be dropped because it is bound to one or more database encryption key


Hi there,

I am trying to recreate a database from a live backup file that was provided to us.

Before I can perform the restore, a certificate needs to be in place on the instance because the database is encrypted.

So I run the following script, which references the correct key and cert files:

------------------------------------------------------

USE master;
DROP CERTIFICATE NCSServerCert
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'my password'
GO
CREATE CERTIFICATE NCSServerCert
    FROM FILE = 'E:\EncryptKey\NCSIS_Certificate.cer'
    WITH PRIVATE KEY (FILE = 'E:\EncryptKey\Key.pvk',
    DECRYPTION BY PASSWORD = 'my password');
GO

---------------------------------------------------------

When I run my script to apply the certificate, it fails saying:

Msg 15578, Level 16, State 1, Line 3

There is already a master key in the database. Please drop it before performing this statement.

Msg 15232, Level 16, State 1, Line 2

A certificate with name 'NCSServerCert' already exists or this certificate already has been added to the database.

So I tried dropping the cert before re-running the script, but the message I get is as follows:

The certificate 'NCSServerCert' cannot be dropped because it is bound to one or more database encryption key.

At this point I am unsure as to how to proceed from here.

We need the live database recreated from this file so that we can do our comparison between the UAT and live schemas.
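
For reference, that drop error usually means a database on this instance still has a TDE database encryption key protected by the certificate in master. A hedged sketch for finding which database that is (only drop the key if you genuinely no longer need TDE there):

-- Which databases' encryption keys are protected by which certificate in master?
SELECT DB_NAME(dek.database_id) AS database_name, c.name AS certificate_name
FROM sys.dm_database_encryption_keys AS dek
JOIN master.sys.certificates AS c
  ON c.thumbprint = dek.encryptor_thumbprint;

-- In the database that still uses the certificate, and only if TDE can be removed there:
-- ALTER DATABASE [SomeDb] SET ENCRYPTION OFF;
-- USE [SomeDb];
-- DROP DATABASE ENCRYPTION KEY;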

Any help would be appreciated,

Adam


TCP Provider: The semaphore timeout period has expired.


Hello,

I have a branch office server (Windows Server 2008 R2) running SQL Server 2005 SP4 (9.0.5057) that is getting the following error message:

Replication-Replication Distribution Subsystem: agent BRANCH1-JOBNAME-HEADOFFICE-10 scheduled for retry. TCP Provider: The semaphore timeout period has expired.

It is replicating data over a VPN to our Head Office server (Server 2008 R2) running SQL Server 2008 R2 SP1 (cluster). The connection between the two servers is stable. The issue arose about a week ago, but it had been working for over a year before that.

Everything I read points to a network issue, but I haven't been able to find a cause, if it is indeed network related. Any recommendations would be appreciated!

Thanks!


setting up Peer2peer replication problem


I'm trying to set up a 2-node peer-to-peer configuration, but I'm having a bit of a problem getting it to work.

This is the error I see:

Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x0000804400001379009900000002, Command ID: 1)

Error messages:
Explicit value must be specified for identity column in table 'MSpeer_lsns' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column. (Source: MSSQLServer, Error number: 545)
Get help: http://help/545
Explicit value must be specified for identity column in table 'MSpeer_lsns' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column. (Source: MSSQLServer, Error number: 545)
Get help: http://help/545
The procedure sys.sp_MSaddpeerlsn failed to INSERT into the resource MSpeer_lsns.. Server error = 0. (Source: MSSQLServer, Error number: 21499)
Get help: http://help/21499

What I've done (using T-SQL) is create a backup from the "source db" and restore the database to each of the peer nodes.

Then, after searching for the identity columns in the articles I'm going to replicate, I reseed them so that the nodes do not have overlapping identity ranges; then I create the publication, add the articles, then the subscribers, and finally the distribution agent.

I've been trying to find my mistake for a couple of days now, so hopefully someone here knows what I've missed or done wrong.

Script template for creating the publications:

-- Enabling the replication database
use master
exec sp_replicationdboption @dbname = <@db>, @optname = N'publish', @value = N'true'

-- Check if the publication exists
USE Distribution
IF EXISTS (SELECT publication FROM MSpublications WHERE publication = '<@pubName>')
	BEGIN
		use [<@db>]
		EXEC sp_changedbowner 'sa'
		BEGIN TRY
			exec sp_removedbreplication N'<@pubName>' 
			exec sp_droppublication @publication = N'<@pubName>'
		END TRY
		BEGIN CATCH
		END CATCH
	END
-- Adding the transactional publication
use [<@db>]
exec sp_addpublication @publication = '<@pubName>'
	, @description = N'<@pubDescription>'
	, @sync_method = N'native'
	, @retention = 0
	, @allow_push = N'true'
	, @allow_pull = N'true'
	, @allow_anonymous = N'false'
	, @enabled_for_internet = N'false'
	, @snapshot_in_defaultfolder = N'true'
	, @compress_snapshot = N'false'
	, @ftp_port = 21
	, @ftp_login = N'anonymous'
	, @allow_subscription_copy = N'false'
	, @add_to_active_directory = N'false'
	, @repl_freq = N'continuous'
	, @status = N'active'
	, @independent_agent = N'true'
	, @immediate_sync = N'true'
	, @allow_sync_tran = N'false'
	, @autogen_sync_procs = N'false'
	, @allow_queued_tran = N'false'
	, @allow_dts = N'false'
	, @replicate_ddl = 1
	, @allow_initialize_from_backup = N'true'
	, @enabled_for_p2p = N'true'
	, @enabled_for_het_sub = N'false'

Script (template) to add articles:

-- add articles to a publication (<@pubName>)
USE [<@db>]						
DECLARE @filter_articles BIT = <@filterFlag>;
DECLARE @articleCursor CURSOR, @artSchema nvarchar(50), @artName nvarchar(100)
DECLARE @schemaNname nvarchar(100);
IF @filter_articles = 1
	BEGIN
		SET @articleCursor = CURSOR FAST_FORWARD FOR						 
		SELECT object_schema_name(ao.object_id) as [schema], ao.name
		FROM sys.extended_properties as p
		INNER JOIN sys.all_objects as ao 
		ON ao.object_id = p.major_id
		WHERE p.value = 'p2p'
		AND OBJECTPROPERTY(ao.object_id,'TableHasPrimaryKey')= 1 						 
	END
ELSE
	BEGIN							 
		SET @articleCursor = CURSOR FAST_FORWARD FOR						 
		SELECT object_schema_name(ao.object_id) as [schema], ao.name
		FROM sys.all_objects as ao
		WHERE OBJECTPROPERTY(ao.object_id,'TableHasPrimaryKey')= 1 		
	END	
OPEN @articleCursor
	FETCH NEXT FROM @articleCursor INTO @artSchema, @artName
	WHILE @@FETCH_STATUS = 0
		BEGIN
			SET @schemaNname = @artSchema + '.' + @artName
			-- modify the identity seed if present
			IF EXISTS (
				SELECT c.name FROM sys.all_columns as c
					 INNER JOIN sys.all_objects as o ON o.object_id = c.object_id
					 INNER JOIN sys.schemas as s ON s.schema_id = o.schema_id
					 WHERE o.name = @artName AND s.name = @artSchema and c.is_identity = 1 
				)
				BEGIN
					DBCC CHECKIDENT (@schemaNname, RESEED, <@seed>);
				END
			exec sp_addarticle @publication =N'<@pubName>'
				, @article = @schemaNname
				, @source_owner = @artSchema
				, @source_object = @artName
				, @type = N'logbased'
				, @description = N''
				, @creation_script = null
				, @pre_creation_cmd = N'drop'
				, @schema_option = 0x0000000008037FDF
				, @identityrangemanagementoption = N'manual'
				, @destination_table = @artName
				, @destination_owner = @artSchema
				, @status = 16
				, @vertical_partition = N'false'
				, @ins_cmd = N'CALL sp_MSins_dbotrx'
				, @del_cmd = N'CALL sp_MSdel_dbotrx'
				, @upd_cmd = N'SCALL sp_MSupd_dbotrx'
			FETCH NEXT FROM @articleCursor INTO @artSchema, @artName
		END
	CLOSE @articleCursor
DEALLOCATE @articleCursor


Subscriber script (template):

-----------------BEGIN: Script to be run at Publisher '<@publisher>'-----------------
use [<@db>]
-- check if the subscription exists
declare @found as int
EXEC sp_helpsubscription @publication = N'<@pubName>'
	, @subscriber = N'<@subscriber>'
	, @destination_db = N'[<@db>]'
	, @found = @found OUTPUT;
IF @found = 1
	BEGIN
		-- if it exists drop it and re-create it.
		exec sp_dropsubscription @publication = N'<@pubName>'
			, @subscriber = N'<@subscriber>'
			, @destination_db = N'<@db>'
			, @article = N'all';
	END
exec sp_addsubscription @publication = N'<@pubName>'
	, @subscriber = N'<@subscriber>'
	, @destination_db = N'<@db>'
	, @subscription_type = N'Push'
	, @sync_type = N'replication support only'
	, @article = N'all'
	, @update_mode = N'read only'
	, @subscriber_type = 0

Finally, the script (template) for adding the distribution agent:

exec sp_addpushsubscription_agent @publication =  N'<@pubName>'
	, @subscriber = N'<@subscriber>'
	, @subscriber_db = N'<@db>'
	, @job_login = N'<@agentUsername>'
	, @job_password = N'<@agentPassword>'
	, @subscriber_security_mode = 0
	, @subscriber_login = N'<@subscriberUsername>'
	, @subscriber_password = N'<@subscriberPassword>'
	, @frequency_type = 64
	, @frequency_interval = 0
	, @frequency_relative_interval = 0
	, @frequency_recurrence_factor = 0
	, @frequency_subday = 0
	, @frequency_subday_interval = 0
	, @active_start_time_of_day = 0
	, @active_end_time_of_day = 235959
	, @active_start_date = 20140404
	, @active_end_date = 99991231
	, @enabled_for_syncmgr = N'False'
	, @dts_package_location = N'Distributor'


Developer Frog Haven Enterprises

Transaction Log file growing hugely in Full recovery model although a regular log backup in place


Hello, 

One of my databases is 46 GB and running in the Full recovery model.

There is a transaction log backup in place which runs every 15 minutes. 

I've seen the transaction log file grow to 8 GB every other day. I then manually have to shrink the transaction log file to release the unused space.

I'm quite tempted to put a job in place to run the DBCC shrink command every day; however, I've read on a few of the forums that regularly shrinking a transaction log file is not recommended and could damage it.

Any suggestions on this, or any alternative solutions?
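
For reference, before scheduling shrinks it is worth checking what is actually keeping the log from truncating; since this is the replication forum, a REPLICATION wait would point at a log reader or CDC issue rather than at the backup schedule. A hedged sketch (database name assumed):

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';   -- assumption: your database name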

Best regards, 

MC


Non-Admin access to Replication Article using ReplMonitor role not working?


Greetings, we are trying to give some non-sa users access to check Replication Monitor in our non-production environment, and I followed the article in the TechNet doc here:

http://technet.microsoft.com/en-us/library/ms151221.aspx

The user has an AD account; he can access the servers fine and is in the replmonitor role in the distribution database on the distribution server, which is separate from the publications on another SQL Server and separate from the subscriber database on yet another SQL Server. These are all running the same version of SQL Server 2008 R2 Enterprise/Developer.

When I have the user start Replication Monitor, it starts and shows the distributor, but when I try to have them add a publisher we get the error "server xxxxx is neither a publisher nor a distributor, or you do not have permission to access replication functionality on this server". Any idea what I am missing here, as the article looks straightforward? Thanks
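
For reference, a hedged sanity check you can run on the distributor to confirm the login really is mapped into the distribution database and sits in the replmonitor role:

USE distribution;
SELECT dp.name AS member_name
FROM sys.database_role_members AS rm
JOIN sys.database_principals AS r  ON r.principal_id = rm.role_principal_id
JOIN sys.database_principals AS dp ON dp.principal_id = rm.member_principal_id
WHERE r.name = N'replmonitor';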

Merge Replication - Is it recommended for server-to-server replication across 2 different stores?


Good morning,

I have a client that needs to replicate a DB from one department store to another, and we are considering solutions or alternatives for keeping the same data in the two stores. They have at least 15 POS systems in each one.

Is merge replication in SQL Server 2012 a good alternative?

The two stores need to modify, for example, the same product prices and things like that, and synchronize those changes between them. They are usually connected via VPN.

Thank you very much for your answer

James

Replication for Sharepoint SQL Server


First, let me explain the situation: my company is planning to use SharePoint for the intranet and as a replacement for file servers, so there will be a lot of data inside the SQL databases. I'm the DBA, so I'm responsible for implementing and administering the SQL Servers behind SharePoint. There is one SQL Server (for SharePoint and content), and my task will later be to implement transactional replication to a second SQL Server. We're using SharePoint 2013 and SQL Server 2012 Standard Edition.

Because I had never implemented replication before, I tested this out in a test environment today. I was surprised there were so many drawbacks, or at least obstacles:

* Every database needs its own publication
* New tables will not be included in existing publications
* Only tables with a primary key can be replicated
* (...)

So I asked the SharePoint administrator whether there will be many additional databases later, once SharePoint is installed, because I have to create a new publication for each database. The answer was that only a new SharePoint site collection would need an additional database, and this would happen seldom. Sounds great. So I also asked whether new tables are created during operation, because I would then have to add those articles to the existing publications. The SharePoint administrator wasn't sure about this. So:

1. Question: Are many additional tables created after SharePoint is installed, during normal operation by users?

Since tables without a primary key can't be replicated:

2. Question: Do all SharePoint tables have a primary key?
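
One way to check the second question yourself is to list the tables without a primary key in a content database; a hedged sketch (run in the SharePoint content database you intend to publish):

-- Tables with no primary key cannot be articles in transactional replication
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
ORDER BY s.name, t.name;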

(For those who would like to advise me to implement mirroring or AlwaysOn Availability Groups instead: our company has already discussed this point and decided to use replication.)

Thanks



