Articles on this Page
- 04/07/13--09:38: _Preparing a model o...
- 09/14/12--11:10: _Delete the old fold...
- 04/15/13--12:30: _Is SQL Server Snaps...
- 04/15/13--10:02: _SQL Server Transact...
- 04/15/13--21:27: _My subscribptions d...
- 04/11/13--14:35: _Error on the primar...
- 04/16/13--00:03: _SQL Server Web Repl...
- 04/15/13--23:07: _How to get the Arti...
- 04/16/13--13:39: _How to update the ...
- 04/09/13--05:00: _enabling log shipping
- 04/17/13--12:25: _sqlce 28037 error -...
- 04/18/13--03:42: _Web synchronization...
- 04/17/13--22:49: _how to works the re...
- 04/17/13--22:30: _replication
- 04/19/13--00:44: _sp_changearticle pr...
- 04/19/13--14:27: _Triggers delete for...
- 04/19/13--23:51: _metadata performanc...
- 04/21/13--11:17: _Log Shipping - Rest...
- 04/21/13--23:36: _The replication age...
- 04/21/13--22:50: _In which replicatio...
Using transactional replication:
I would like to subscribe to a publication in order to transfer all the objects (tables, indexes, PKs) but not bring over the snapshot of the data (saving tons of time).
To that model I would then add all our non-replicated objects (derived tables, additional indexes, etc.).
I would then use that model as a means of comparing against an existing DB, in order to identify any structural diffs that need to be scripted.
Someone mentioned not implementing the stored procedures for insert/update/delete, but I don't think that would prevent the initial snapshot from being applied, and that snapshot has a ton of data.
Is there any way to transfer the structure, but not the snapshot of the data, to a subscriber?
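One possible approach, sketched below under assumptions: create the subscription with @sync_type = N'replication support only', which skips the initial snapshot entirely. This assumes the schema has already been created at the subscriber (for example, scripted out of the publisher with SSMS "Generate Scripts"). All server, database and publication names here are placeholders, not values from the original post.

```sql
-- At the publisher: add a subscription that skips the initial snapshot.
-- 'replication support only' assumes the subscriber already has the
-- schema in place; no data or schema is delivered by a snapshot.
EXEC sp_addsubscription
    @publication       = N'MyPublication',      -- placeholder
    @subscriber        = N'SubscriberServer',   -- placeholder
    @destination_db    = N'ModelDB',            -- placeholder
    @subscription_type = N'Push',
    @sync_type         = N'replication support only';
```

If the goal is only a structural model and it never needs to synchronize, simply scripting the objects from the publisher may be simpler than involving replication at all.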
Under the distributor's working folder, a snapshot folder is created for each publication. I have scheduled hourly snapshot replication, so every hour a folder with a unique number is created. How do I delete a folder after a day, once I have backed it up? I am using SQL Server 2008 and Windows Server 2008 R2.
Can I use anything with a maintenance plan?
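Note that the built-in "Distribution clean up: distribution" Agent job already removes snapshot files older than the maximum distribution retention. If you specifically want folders gone one day after backup, a hedged sketch of an Agent job / maintenance-plan T-SQL step follows; the snapshot path is a placeholder for your own repldata folder, and xp_cmdshell must be enabled on the server.

```sql
-- Remove snapshot subfolders whose last-modified date is more than
-- one day old. "D:\ReplData\unc" is a placeholder path; point it at
-- your distributor's snapshot (repldata) folder.
EXEC master.dbo.xp_cmdshell
    'forfiles /P "D:\ReplData\unc" /D -1 /C "cmd /c if @isdir==TRUE rd /s /q @path"';
```

Run this as a scheduled job step after your backup step so folders are only deleted once the backup has completed.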
I am on my way to becoming a SQL guru, but until then I have recently been through a lot of setups, creating and deleting lots of publications and subscriptions while trying to figure out the best way to set up SQL Server 2008 R2 database replication. In the process I have learnt a lot about replication, and not all of it fun. (I inherited the whole environment, and since it all works, I'm not breaking anything. ;-])
Eventually I decided transactional replication won't work for us, since some of our tables are missing primary keys, and mirroring isn't exactly our intention.
We just want at least a day-old copy of the databases that we can point/redirect applications to manually and relatively quickly; these range from custom databases to SharePoint databases.
So, my question: the snapshots pushed out by the publisher, are they differentials/incrementals (such that changes are merged into parked snapshots), or is a full snapshot taken every time the distribution agent runs?
Better still, what other ways can I achieve my goal?
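For a roughly day-old standby copy of databases without primary keys, plain backup and restore may fit better than snapshot replication (which generates a full snapshot every time, not a differential). A minimal sketch under assumptions; all database names and paths are placeholders:

```sql
-- On the source server: nightly full backup (placeholder names/paths).
BACKUP DATABASE MyAppDB
    TO DISK = N'\\backupshare\MyAppDB_full.bak'
    WITH INIT, COMPRESSION;

-- On the standby server: restore WITH STANDBY so the copy is readable,
-- and can be recovered and repointed to quickly if the source fails.
RESTORE DATABASE MyAppDB
    FROM DISK = N'\\backupshare\MyAppDB_full.bak'
    WITH REPLACE,
         STANDBY = N'D:\SQLData\MyAppDB_undo.dat';
```

Log shipping automates exactly this pattern (backup, copy, restore on a schedule) and has no primary key requirement, so it is worth evaluating here.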
I have a pretty basic SQL Server transactional replication setup for a number of databases that is working fine. Every now and then I need to add some new stored procedures or tables to the publisher. To replicate the new articles I do this manually through Management Studio; when I do so, the next time the Snapshot Agent runs, I get one newly generated snapshot article for each article I added. Perfect!
The problem I'm having is doing this through a script. I am able to add tables and stored procedures to the publisher without issue using sp_addarticle, but when the Snapshot Agent runs, I get no additional snapshot articles. If I go into the publication using Management Studio, uncheck one of the articles I added via script, exit out, go back in and add that article again, then the next time the Snapshot Agent runs I get ALL of the snapshot articles: the ones I added through the script along with the one I used to force the situation.
Clearly I'm missing a step. When I compare the Snapshot Agent details between a run that gives me what I'm looking for and one that does not, the magic line seems to be "Activated articles for publication 'abcde' at the publisher." When I manually add at least one of the articles, I get this line in the details. If I add everything via script, I do not.
Can someone tell me what step I'm missing? For a minute I thought it might be sp_reinitsubscription, but I don't think so anymore, because if I manually reinitialize the subscription through SSMS, I don't get the new articles. It really feels like I've got to do something on the publication side of things.
Thanks in advance!
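One common cause (an assumption, not confirmed by the post): when the publication was created with @immediate_sync = true, articles added via sp_addarticle are not activated until the subscriptions are refreshed, and the SSMS dialog does that refresh for you behind the scenes. A scripted equivalent, reusing the publication name 'abcde' from the agent log line above:

```sql
-- Option 1: turn immediate_sync off so new articles activate on the
-- next Snapshot Agent run (allow_anonymous may need to be set to
-- 'false' first, in a similar sp_changepublication call).
EXEC sp_changepublication
    @publication = N'abcde',
    @property    = N'immediate_sync',
    @value       = N'false';

-- Option 2 (or in addition): explicitly refresh the subscriptions so
-- the newly added articles are picked up.
EXEC sp_refreshsubscriptions @publication = N'abcde';
```

After either call, the next Snapshot Agent run should log the "Activated articles" line and generate snapshots only for the new articles.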
Numerous errors about not being able to drop a table because it's in replication. No one has dropped it; it's still there. Any ideas about what could be wrong? We are about 28 hours into this outage and feeling punchy.
Command attempted: DROP TABLE [dbo].[WS_EMP_COV] (Transaction sequence number: 0x00473D340000452202B600000000, Command ID: 307)
Error messages: Cannot drop the table 'dbo.WS_EMP_COV' because it is being used for replication. (Source: MSSQLServer, Error number: 3724) Get help: http://help/3724
Any help greatly appreciated.
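If the intent is to let that DROP TABLE go through, the table normally has to be removed from the publication first. A sketch under assumptions; the table name comes from the error above, but the publication and subscriber names are placeholders:

```sql
-- Remove the article's subscriptions, then the article itself, before
-- dropping the table. Publication/subscriber names are placeholders.
EXEC sp_dropsubscription
    @publication = N'MyPublication',
    @article     = N'WS_EMP_COV',
    @subscriber  = N'all';

EXEC sp_droparticle
    @publication               = N'MyPublication',
    @article                   = N'WS_EMP_COV',
    @force_invalidate_snapshot = 1;

DROP TABLE [dbo].[WS_EMP_COV];
```

If error 3724 is coming from the Distribution Agent replaying a DROP that was issued at the publisher, the subscriber side likely still has the table marked for replication, and the same article cleanup applies there.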
I configured log shipping using the following procedure.
I have enabled log shipping for 2 databases. The wizard created the LSAlert job, which fails with: Executed as user: NT AUTHORITY\NETWORK SERVICE. The log shipping primary database macinename.AbraHRMS_Live has backup threshold of 60 minutes and has not performed a backup log operation for 78 minutes. Check agent log and logshipping monitor information. [SQLSTATE 42000] (Error 14420). The step failed.
However, the LSBackup job for the database shows no error. I see this error only for the primary server database.
The secondary server shows no errors in the job history or in the Transaction Log Shipping Status report.
Why do I get this error on the primary server?
For both databases, I back up at the same time at 12:00 AM, copy 15 minutes later at 12:15 AM, and restore 15 minutes after that at 12:30 AM. Is this right?
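One thing worth checking (an assumption, not confirmed by the post): with log backups taken only once a day, a 60-minute backup threshold will always be exceeded, so error 14420 fires even though the jobs succeed. The threshold can be raised to match the schedule; the database name below is from the post, while the threshold value is a placeholder you should size to your own backup interval:

```sql
-- Run against the primary/monitor. With daily backups, the backup
-- threshold (in minutes) needs to exceed 1440 to avoid false alerts.
EXEC master.dbo.sp_change_log_shipping_primary_database
    @database         = N'AbraHRMS_Live',
    @backup_threshold = 1500;
```

Alternatively, back up the log more frequently (for example every 15 or 30 minutes); log backups are incremental, so frequent backups are cheap and shrink your exposure window.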
I am trying to configure SQL Server web replication in my IIS 7.5 and test it over the internet (between different domains, no VPN). I have one doubt regarding the configuration of the SSL certificate.
I created a new website named wrenchsql.com and created the virtual directory 'wrench'. I configured a public IP for this site, and I can now access the virtual directory publicly using that IP. I think it is not possible to configure web replication using a public IP; it needs a domain name. What are the steps to make it live? If I purchase a domain named wrenchsql.com and attach it to my IP, is it possible to configure web replication? Do I need to purchase an SSL certificate for the URL?
In my business logic handlers, I have separated all handlers into one class (separate namespace), apart from the insert handlers and update handlers.
Those handlers are in a common namespace and contain logic shared by all articles (to avoid duplicating code).
How can I tell, inside these common handlers, which article they are being called for?
Using the DataSet I can get the record values, but I am not able to get the table/article name.
How can I get the table name within the handlers?
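Assuming these are merge replication business logic handlers built on BusinessLogicModule, the article name is passed into Initialize, so a shared handler can capture it there and use it in the per-change handlers. A sketch; the class and namespace names are placeholders, not from the original post:

```csharp
using System.Data;
using Microsoft.SqlServer.Replication.BusinessLogicSupport;

namespace Common.Handlers  // placeholder namespace
{
    public class SharedHandler : BusinessLogicModule
    {
        private string articleName;

        public override ChangeStates HandledChangeStates
        {
            get { return ChangeStates.Insert | ChangeStates.Update; }
        }

        // The merge agent calls Initialize per article; capture the
        // article (table) name here for use in the shared handlers.
        public override void Initialize(string publisher, string subscriber,
            string distributor, string publisherDB, string subscriberDB,
            string articleName)
        {
            this.articleName = articleName;
        }

        public override ActionOnDataChange InsertHandler(
            SourceIdentifier insertSource, DataSet insertedDataSet,
            ref DataSet customDataSet, ref int historyLogLevel,
            ref string historyLogMessage)
        {
            // this.articleName identifies which article fired the handler.
            historyLogMessage = "Insert for article: " + this.articleName;
            historyLogLevel = 1;
            return ActionOnDataChange.AcceptData;
        }
    }
}
```

Because the agent creates a handler instance per article registration, storing the name in an instance field is safe for use across that article's inserts and updates.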
Backup, copy and restore are all working without errors in log shipping, and the Transaction Log Shipping Status report also shows no errors on the secondary server. However, the primary server raises an alert, and it is reporting the wrong backup file: it is not recording the most recent backup file, so somewhere a table is not being updated. Since the alert mentions the secondary server, I would think the issue is with the secondary server. So how do I update the secondary server, and which table needs to be updated?
On the primary server, msdb.dbo.log_shipping_primary_databases is showing an old backup date.
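When the jobs succeed but the monitor tables are stale, one option (an assumption, not confirmed as the fix for this case) is to refresh the monitor records from the agents' session history with sp_refresh_log_shipping_monitor. The GUID below is a placeholder; look up the real one first:

```sql
-- Find the backup agent's ID and the stale values on the primary.
SELECT primary_id, primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_primary_databases;

-- Refresh the monitor's record for that agent (agent_type 0 = backup).
-- Replace the GUID with the primary_id returned above.
EXEC msdb.dbo.sp_refresh_log_shipping_monitor
    @agent_id   = '00000000-0000-0000-0000-000000000000',
    @agent_type = 0;
```

Also check that the account running the LSAlert/monitor job can reach the msdb tables it reads; permission gaps can leave the monitor showing old values while the agent jobs themselves succeed.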
I am testing a log shipping environment for the event of a failure at the primary. I created a script to disable the jobs on the secondary, applied the last log backup, and was able to bring the database online on the secondary server. Now I need to re-enable log shipping on the primary when the primary server comes back online. Even after disabling the backup job on the primary and the restore job on the secondary, the database properties on the primary still show log shipping as enabled. Will enabling the jobs again restart log shipping, or is there another setting I need to change?
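Assuming the log shipping configuration itself was never removed (only the Agent jobs were disabled), re-enabling those jobs should resume the backup/copy/restore cycle. The job names below are placeholders following the default LS naming pattern:

```sql
-- On the primary: re-enable the backup job.
EXEC msdb.dbo.sp_update_job
    @job_name = N'LSBackup_MyDatabase',   -- placeholder job name
    @enabled  = 1;

-- On the secondary: re-enable the copy and restore jobs.
EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_PRIMARY_MyDatabase',    @enabled = 1;
EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_PRIMARY_MyDatabase', @enabled = 1;
```

The caveat: if the secondary database was recovered fully online (breaking the restore chain), log shipping must be reconfigured from a fresh full backup rather than simply re-enabling the jobs.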
First of all, sorry for my English. I will try to be as clear as I can. :)
I have a major problem using SQL CE 3.5 to sync against a domain member server. Here is the scenario:
- DC on Windows 2008 R2 with an enterprise root CA that issues certificates for my whole domain, using a SHA1 signature. I know this could be an issue, but as I will explain, I have ruled it out.
One member server hosts the database and the sync feature:
- SQL Server 2005 Enterprise with distribution, publication and subscriber enabled.
- IIS installed, with SQL CE 3.5 SP2 and a website configured for end-device web sync.
- Computers running a C# program doing SQL CE 3.5 sync, with Windows authentication over SSL using an IIS certificate issued by my 2008 CA.
What is working:
- [mywebsite]/sqlcesa35.dll?diag is all green, except 10.0 Database Reconciler, because I don't have 2008 installed. It works from any computer, domain member or not.
- Sync from any domain member, from the LAN and from the internet.
I use a domain user who has rights in the database, on the website, and on the share containing the data to replicate.
- The CA root certificate is installed on the non-domain computer.
- The SSL server certificate's signature is trusted via the installed root certificate.
What is not working:
- Sync from a non-domain computer, even on the LAN.
Every time I sync, I get a 28037 SQL CE error.
From what I have found and understood, I can rule out the SHA1 issue, because domain computers are able to work over the SSL connection even though the CA root certificate uses a SHA1 signature. Maybe being a non-domain computer could cause trouble, but I don't think so.
I have also tested with the SQL management tools from a non-domain computer, with the relevant .sdf file and a proper subscription already configured but empty (with a blank initial sync), and I still get the 28037 error when I start the sync from that computer.
If someone has an idea for dealing with domain and non-domain computers, you will help me more than you think.
I am attempting to set up a merge pull subscription using web synchronisation, SQL Server 2012 and Windows Server 2012. I have the publisher and distributor on one server and IIS on a completely separate server; they are not on the same LAN and there is no VPN between them. I will have SQL Server 2012 Express subscribers.
Do I need a VPN between the publisher/distributor and IIS?
On my diag page https://replweb.products2web.co.uk/SQLReplication/replisapi.dll?diag, CLSID_SQLReplErrors has status failed, and the replrec.dll classes also have status failed, with the same error code 0x80040154 for both.
I believe IIS needs to be able to see the snapshot folder, and all the documentation I have read says you must use a UNC path to it. Since my servers are separate this isn't possible, as they are not on the same network, which leads me to wonder if I need a VPN between them.
Our server contains an ERP database, and data is entered into the ERP daily. I want this ERP database on another server as well: the same database with the same data, refreshed on a daily basis.
When I enter data in the ERP, it goes to the ERP database on the first server; at the same time, that database should be updated on the other server or machine.
Also, what is a centralized publisher? Please explain.
The following link, as well as BOL, says that changing the pre_creation_cmd property using sp_changearticle requires only @force_invalidate_snapshot to be set to true. The @force_reinit_subscription option is not required to be set to true.
Look at the remarks section of the article.
However, when I execute the proc without @force_reinit_subscription set to true, it errors out:
"Msg 20608, Level 16, State 1, Procedure sp_MSreinit_article, Line 190
Cannot make the change because there are active subscriptions. Set @force_reinit_subscription to 1 to force the change and reinitialize the active subscriptions."
This is what I am executing. Please suggest what I can change so that I don't have to reinitialize the article. This is a huge table with more than 200 GB of data, and there are five such tables that I need to change. As I understand it,
since I am not making any change to the existing replication subscriptions, and this change of pre_creation_cmd will only apply when I reinitialize the subscription, I should not need @force_reinit_subscription set to true.
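For reference, since the post's actual script was not included, a call of the shape that raises Msg 20608 looks like this; the publication, article, and pre_creation_cmd value are placeholders:

```sql
-- Changing pre_creation_cmd on an article. With active subscriptions,
-- SQL Server raises Msg 20608 unless @force_reinit_subscription = 1 is
-- supplied as well, which marks those subscriptions for reinitialization.
EXEC sp_changearticle
    @publication               = N'MyPublication',
    @article                   = N'MyBigTable',
    @property                  = N'pre_creation_cmd',
    @value                     = N'truncate',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;
```

A possible workaround to avoid pushing 200 GB again (an assumption, to be tested in a non-production environment first): drop the article and re-add it with the desired pre_creation_cmd, then add the subscription back with @sync_type = N'replication support only' so no snapshot is delivered.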
Dear all, I would like to know how to recover some triggers that were deleted from a merge replication. The deleted triggers were named MSmerge_del, MSmerge_ins and MSmerge_upd. Failing that, how can I recover all the triggers of a replication?
Thanks in advance.
Any idea why the following table grows very fast? And what configuration options are available for a merge publication so that it keeps as little metadata as possible?
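Assuming the fast-growing tables are the merge metadata tables (MSmerge_contents, MSmerge_genhistory and similar), the usual levers are the publication retention period and the metadata cleanup procedure. A sketch; the publication name and retention value are placeholders:

```sql
-- Shorten the publication retention period (in days) so metadata can
-- be cleaned up sooner. Subscribers that fail to sync within this
-- window will need to be reinitialized, so size it carefully.
EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'retention',
    @value       = N'7';

-- Manually trigger metadata cleanup in the publication database and
-- capture how many generation-history rows were removed.
DECLARE @rows int;
EXEC sp_mergemetadataretentioncleanup @num_genhistory_rows = @rows OUTPUT;
SELECT @rows AS genhistory_rows_cleaned;
```

The trade-off is retention versus reinitialization risk: shorter retention means less metadata, but less tolerance for subscribers that sync infrequently.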
Fellow SQL'ers, I am learning about log shipping. I understand the concept, but I am wondering: how does a secondary server know which file to load during the restore when other, previously processed transaction log files are sitting in that folder, yet to ...
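The restore job does not scan filenames blindly; it records its position in msdb on the secondary and continues from the next log backup in sequence. You can see the bookmark it uses with a query against the standard msdb tables:

```sql
-- On the secondary server: the restore job records the last file it
-- restored and resumes from the next log backup in the chain.
SELECT secondary_database,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_secondary_databases;
```

In addition, RESTORE LOG itself validates the log chain by LSN, so a file presented out of order is rejected rather than applied; already-processed files in the folder are therefore harmless.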
I get this error message in my P2P replication. I have read all the entries in this forum about the problem, but nothing has helped.
The last start time of the Distribution Agent was 19/04 at 1:50 PM, and restarting all the agents does not change anything.
Replication in the direction from the other node to this node runs perfectly (read and write).
I got this error once at random, and from that point on the replication keeps throwing it. The replication had run fine for over 5 days before that.
There are no blocked SPIDs, and the heartbeat interval has been changed as well...
Is there any chance to revive the replication? I have no clue.
In which replication topology do the Distributor and a Subscriber not run under the same instance? While configuring a subscriber, I get an error message saying that the instance hosting the Distributor will not allow any subscriber on that instance.
Please help me.