Published on Thu, Nov 20, 2014
In the first place, we need to know why such a requirement is needed. IMO, it's absolutely a business necessity to maintain some kind of historical data on the target database (slave node). In particular, out of multiple slave nodes, one slave should retain the very first form of the data as it was initially written into the database.
To accomplish this requirement, we should come up with some kind of filter, like TRIGGERs/RULEs on the slave node, so that it avoids replaying DELETE and UPDATE statements. Since we are dealing with Slony-I, it doesn't have a built-in mechanism to filter DMLs while replaying them on the slave node, even though it has gathered all the events from the master node. (AFAIK, MySQL, Oracle, and SQL Server do support such filters.)
To get this straight: the traditional Slony-I way maintains uniqueness of rows across all nodes through its core requirement that tables must have primary keys. In such an architecture it is hard to exclude DELETE/UPDATE statements. Take as an example a primary key column "orderid" of an "orders" table: a first INSERT statement with the value 100 is replicated as the first form of the row on the filtered slave node. Later, a DELETE statement is executed for "orderid=100" and removes the row on the master. Now, if any subsequent INSERT or UPDATE statement attempts to reuse "orderid=100", the slave node hits a duplicate key violation and it simply breaks the replication.
ERROR:  duplicate key value violates unique constraint "reptest_pkey"
DETAIL:  Key (id)=(2) already exists.
CONTEXT:  SQL statement "INSERT INTO "public"."reptest" ("id", "name") VALUES ($1, $2);"
..... or ....
CONTEXT:  SQL statement "UPDATE ONLY "public"."reptest" SET "id" = $1 WHERE "id" = $2;"
2014-11-17 23:18:53 PST ERROR  remoteWorkerThread_1: SYNC aborted
Thus, implementing the rule is not an issue, yet one should be extremely cautious once it is in place. In reality, applying these filters on a Slony-I slave node is very fragile: the application/developer should always keep in mind that any duplicate entry of a row by INSERT or UPDATE could break the replication.
As DML filtering is not possible with Slony-I alone, we can make use of PostgreSQL's CREATE RULE…ON DELETE/ON UPDATE DO INSTEAD NOTHING and apply that RULE to the table with ALTER TABLE…ENABLE REPLICA RULE to void DELETE/UPDATE statements. (A rule enabled this way fires only when session_replication_role is set to 'replica', which is exactly the mode Slony-I uses when applying events on the slave.) Using this option takes a lot of discipline, so make sure your application and staff members really follow these rules.
To continue with the steps, you should already have a Slony setup; in case you still need to set one up, you can refer to my past post here.
Steps on Slave Node (Master DB: postgres, Slave DB: demo, Port: 5432):
Stop slon daemons
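How you stop the slon daemon depends on how you started it; the cluster name, database name, and config path below are placeholders, so this is only a sketch for a typical setup:

```shell
# If the slave-side slon was started directly, stop it by matching its
# command line (here the slave database is "demo" -- adjust the pattern):
pkill -f 'slon .*dbname=demo'

# Or, if the daemons are managed with the Slony-I altperl scripts:
# slon_kill --config /etc/slony1/slon_tools.conf
```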
Create ON DELETE and ON UPDATE DO INSTEAD NOTHING rules
demo=# CREATE RULE void_delete AS ON DELETE TO reptest DO INSTEAD NOTHING;
CREATE RULE
demo=# CREATE RULE void_update AS ON UPDATE TO reptest DO INSTEAD NOTHING;
CREATE RULE
Apply the RULEs on the table
demo=# ALTER TABLE reptest ENABLE REPLICA RULE void_delete;
ALTER TABLE
demo=# ALTER TABLE reptest ENABLE REPLICA RULE void_update;
ALTER TABLE
Start slon daemons
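Again only a sketch: the cluster name "myrep", the conninfo, the log path, and the node number are placeholders for whatever your setup uses:

```shell
# Start the slave-side slon daemon directly (clustername + conninfo):
nohup slon myrep "dbname=demo host=localhost port=5432 user=postgres" \
    >> /var/log/slony/slave_slon.log 2>&1 &

# Or, with the Slony-I altperl scripts (node 2 being the filtered slave here):
# slon_start --config /etc/slony1/slon_tools.conf node2
```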
Now, you can see below that UPDATE/DELETE have no impact on the slave node:
postgres=# delete from reptest where id=2;
DELETE 1
postgres=# update reptest set id=2 where id=1;
UPDATE 1

--On Master
postgres=# select * from reptest;
 id | name
----+------
  2 | A
(1 row)

--On Slave
demo=# select * from reptest;
 id | name
----+------
  1 | A
  2 | C
(2 rows)
Note that if an INSERT statement is executed with the value 1, it will break the replication. Be warned…!!
Remember, there are other ways to fulfill this request, like dblink, or triggers (e.g. a BEFORE DELETE trigger whose function returns NULL), but I believe the most efficient way is to use RULE/ENABLE REPLICA RULE when you are working with Slony replication.
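For completeness, the trigger-based alternative would look roughly like the sketch below (function and trigger names are my own; it assumes the same reptest table). Returning NULL from a BEFORE row-level trigger cancels the operation, and, like the rule, the trigger must be set to REPLICA mode so it fires for Slony's sessions:

```sql
-- Sketch only: skip_dml/void_delete_update are hypothetical names.
CREATE OR REPLACE FUNCTION skip_dml() RETURNS trigger AS $$
BEGIN
    RETURN NULL;  -- returning NULL from a BEFORE trigger skips the row operation
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER void_delete_update
    BEFORE DELETE OR UPDATE ON reptest
    FOR EACH ROW EXECUTE PROCEDURE skip_dml();

-- Fire only when session_replication_role = 'replica' (i.e., for Slony):
ALTER TABLE reptest ENABLE REPLICA TRIGGER void_delete_update;
```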
By now you might have read many blogs on logical decoding and replication slots, new features in PostgreSQL 9.4; I hope that in the future they might include the concept of filtering DMLs on a slave.
Thank you for visiting.