System/PostgreSQL


Admin

VACUUM FULL ANALYZE

Replication

Statement-based replication

System/PostgreSQL/Londiste

Built-in Replication

Adapted from the official PostgreSQL documentation.

1. Set up connections and authentication so that the standby server can successfully connect to the replication pseudo-database on the primary.

$ vim postgresql.conf
listen_addresses = '192.168.0.10'
$ vim pg_hba.conf
# The standby server must have superuser access privileges.
host  replication  postgres  192.168.0.20/22  trust
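
The trust method with a /22 range lets any host on that subnet stream WAL without authenticating. A tighter rule is sketched below; the replicator role name and its password are illustrative, not part of the original setup. (On PostgreSQL 9.1+ a non-superuser role with the REPLICATION attribute can replace the postgres superuser.)

```
# Tighter alternative: restrict to the standby's single address and
# require a password. On 9.1+, first create a dedicated role, e.g.:
#   CREATE ROLE replicator REPLICATION LOGIN PASSWORD 'secret';
host  replication  replicator  192.168.0.20/32  md5
```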

2. Set up the streaming replication related parameters on the primary server.

$ vim postgresql.conf
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode    = on
archive_command = 'rsync -a %p standby:/path_to/archive/%f'

3. Start postgres on the primary server.

4. Make a base backup by copying the primary server's data directory to the standby server.

$ psql -c "SELECT pg_start_backup('label', true)"
$ rsync -a ${PGDATA}/ standby:/srv/pgsql/standby/ --exclude postmaster.pid
$ psql -c "SELECT pg_stop_backup()"
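
The --exclude matters: postmaster.pid belongs only to the running primary, and copying it can stop the standby from starting. A minimal local sketch of that copy, using temporary directories in place of the real data directories (no server involved):

```shell
#!/bin/sh
# Simulate the base-backup copy step: rsync a "data directory" to a
# "standby", skipping postmaster.pid. Temp dirs stand in for $PGDATA
# and /srv/pgsql/standby.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/postgresql.conf" "$src/postmaster.pid"
rsync -a --exclude postmaster.pid "$src/" "$dst/"
ls "$dst"    # postgresql.conf is copied; postmaster.pid is not
```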

5. Set up replication-related parameters, connections and authentication on the standby server just as on the primary, so that the standby can act as a primary after failover.

6. Enable read-only queries on the standby server. However, if wal_level is archive on the primary, leave hot_standby unchanged (i.e., off).

$ vim postgresql.conf
hot_standby = on

7. Create a recovery command file in the standby server; the following parameters are required for streaming replication.

$ vim recovery.conf
# Note that recovery.conf must be in $PGDATA directory.
standby_mode          = 'on'
primary_conninfo      = 'host=192.168.0.10 port=5432 user=postgres'
trigger_file = '/path_to/trigger'
restore_command = 'cp /path_to/archive/%f "%p"'

8. Start postgres in the standby server. It will start streaming replication.

9. You can calculate the replication lag by comparing the current WAL write location on the primary with the last WAL location received or replayed by the standby. These are reported by pg_current_xlog_location() on the primary and by pg_last_xlog_receive_location()/pg_last_xlog_replay_location() on the standby, respectively.

$ psql -c "SELECT pg_current_xlog_location()" -h192.168.0.10 (primary host)
pg_current_xlog_location 
--------------------------
0/2000000
(1 row)
$ psql -c "select pg_last_xlog_receive_location()" -h192.168.0.20 (standby host)
pg_last_xlog_receive_location 
-------------------------------
0/2000000
(1 row)
$ psql -c "select pg_last_xlog_replay_location()" -h192.168.0.20 (standby host)
pg_last_xlog_replay_location 
------------------------------
0/2000000
(1 row)
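
Turning two such LSNs into a byte lag is simple arithmetic: an LSN "xlogid/xrecoff" (both hex) is the absolute byte position xlogid * 2^32 + xrecoff. A shell sketch, with example values standing in for the query results above:

```shell
#!/bin/sh
# Convert an LSN like 0/2000000 into an absolute byte offset.
lsn_to_bytes() {
    xlogid=${1%%/*}     # hex digits before the slash
    xrecoff=${1##*/}    # hex digits after the slash
    echo $(( 0x$xlogid * 4294967296 + 0x$xrecoff ))
}

# Example values; in practice they come from pg_current_xlog_location()
# on the primary and pg_last_xlog_replay_location() on the standby.
primary_lsn='0/3000060'
standby_lsn='0/2000000'
echo "lag: $(( $(lsn_to_bytes $primary_lsn) - $(lsn_to_bytes $standby_lsn) )) bytes"
# prints: lag: 16777312 bytes
```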

10. You can also check the progress of streaming replication by using the ps command.

The displayed LSNs indicate the byte position that the standby server has written up to in the xlogs.

[primary] $ ps -ef | grep sender
postgres  6879  6831  0 10:31 ?        00:00:00 postgres: wal sender process postgres 127.0.0.1(44663) streaming 0/2000000
[standby] $ ps -ef | grep receiver
postgres  6878  6872  1 10:31 ?        00:00:01 postgres: wal receiver process   streaming 0/2000000


  • How to do failover

Create the trigger file in the standby after the primary fails.
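
Promotion is nothing more than creating that file; the path must match the trigger_file setting in recovery.conf. A sketch (the /tmp path here is purely illustrative):

```shell
#!/bin/sh
# Create the trigger file named by trigger_file in recovery.conf.
# The standby notices it and promotes itself to primary.
# /tmp/pg_trigger is illustrative; use your configured path.
TRIGGER_FILE=${TRIGGER_FILE:-/tmp/pg_trigger}
touch "$TRIGGER_FILE"
```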

  • How to restart streaming replication after failover

Repeat the operations from step 4: take a fresh base backup, adjust the configuration, and start the original primary as the new standby. The new primary server doesn't need to be stopped during these operations.

  • How to restart streaming replication after the standby fails

Restart postgres in the standby server after eliminating the cause of failure.

  • How to disconnect the standby from the primary

Create the trigger file in the standby while the primary is running. The standby is then promoted and runs as a stand-alone server.

  • How to re-synchronize the stand-alone standby after isolation

Shut down the standby as usual, then repeat the operations from step 4.

If you have more than one standby, promoting one will break the others. On each remaining standby, update recovery.conf to point to the new master, set recovery_target_timeline to 'latest', copy (scp/rsync) the new master's pg_xlog directory over, and restart the standby.
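
Concretely, the recovery.conf on each surviving standby might then look like the following sketch; 192.168.0.30 stands in for the promoted server's address and is not from the original setup:

```
standby_mode             = 'on'
primary_conninfo         = 'host=192.168.0.30 port=5432 user=postgres'  # new master
recovery_target_timeline = 'latest'   # follow the timeline switch made at promotion
restore_command          = 'cp /path_to/archive/%f "%p"'
```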