Creating an active-passive 2-node Raspberry Pi cluster for a highly available Squid proxy.

Our configuration implements an active/passive architecture with a shared NFS mount on our network-attached storage (NAS).

The NAS hosts our Squid disk cache and the Squid configuration file, so both nodes share a single configuration.

We need two Raspberry Pis.

We boot the first one with Raspbian and follow these steps:

Configure hostname as "pi1"

Configure a static IP for pi1 (I prefer to do that with a reservation on my DHCP server)

Install squid3, corosync and pacemaker (sudo apt-get install squid3 corosync pacemaker)

Mount the NAS
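The mount point and export below match the paths that appear in the Pacemaker configuration later in this article (/mnt/a300 on NAS 192.168.1.100); adjust them to your own NAS. A manual mount for an initial test might look like this:

```shell
# Create the mount point and mount the NAS export for an initial test.
# Do NOT add this mount to /etc/fstab: Pacemaker will later manage it
# as the share-cache Filesystem resource.
sudo mkdir -p /mnt/a300
sudo mount -t nfs -o user,rw,async,vers=3 192.168.1.100:/share /mnt/a300
```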

Configure squid to use the NAS mount point for the disk cache and the configuration file
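A minimal sketch of the shared configuration file, assuming the paths from the Pacemaker setup later in this article; the cache size and the ACL values are placeholders for a typical home LAN:

```
# /mnt/a300/squid.conf - shared by both nodes
http_port 3128

# Disk cache on the NFS mount: 100 MB, 16/256 first/second-level directories
cache_dir ufs /mnt/a300/cache 100 16 256

# Allow clients from the local network only (adjust to your LAN)
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```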

Test the proxy
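A quick smoke test: initialize the cache directories (a one-time step), start the service, and fetch a page through the proxy from another machine on the LAN.

```shell
# Create the cache directory structure on the NFS share (run once)
sudo squid3 -z

# Start squid manually for the test
sudo service squid3 start

# From any LAN client: request a page through pi1's proxy port
curl -x http://192.168.1.101:3128 -I http://example.com
```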

Stop the squid3 service and configure it not to start automatically at boot
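On the sysvinit-based Raspbian release used here, that might look like the following; on a systemd-based release, `sudo systemctl disable squid3` is the equivalent:

```shell
sudo service squid3 stop
# Keep the init script but remove it from the boot sequence;
# Pacemaker will start and stop squid from now on.
sudo update-rc.d squid3 disable
```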

Shut down pi1

Clone the sd card of pi1
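One way to clone the card is with dd on another Linux machine. The device names below are placeholders; double-check them with lsblk first, because dd will silently overwrite the wrong disk:

```shell
# Read pi1's SD card into an image file (card at /dev/sdX - verify first!)
sudo dd if=/dev/sdX of=pi1-backup.img bs=4M

# Write the image to the second SD card (/dev/sdY) and flush buffers
sudo dd if=pi1-backup.img of=/dev/sdY bs=4M
sync
```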

Use the clone to boot pi2

Configure hostname as "pi2"

Configure a static ip for pi2

Boot pi1

We now have two nodes ready for cluster configuration.

Corosync takes care of our cluster internal communications while Pacemaker manages our cluster resources (start, stop, monitor).

For corosync we are using the following configuration:

/etc/corosync/corosync.conf

```
compatibility: whitetank

aisexec {
        # Run as root - this is necessary to be able to manage resources with Pacemaker
        user: root
        group: root
}

totem {
        version: 2
        secauth: on
        threads: 0
        rrp_mode: active
        token: 10000
        transport: udpu
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastport: 5405
                ttl: 1
                member {
                        memberaddr: 192.168.1.101   # Node 1
                }
                member {
                        memberaddr: 192.168.1.102   # Node 2
                }
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        debug: off
        logfile: /var/log/corosync/corosync.log
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
```
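Because secauth is on, both nodes must share the same /etc/corosync/authkey. A sketch of bringing the stack up on both nodes follows; note that on this Corosync 1.x / Pacemaker 1.1.7 stack Pacemaker may also run as a Corosync plugin, in which case the separate pacemaker start is unnecessary.

```shell
# On pi1: generate the auth key, then copy both files to pi2
sudo corosync-keygen
sudo scp /etc/corosync/authkey /etc/corosync/corosync.conf pi2:/etc/corosync/

# On both nodes: allow corosync to start, then bring the stack up
sudo sed -i 's/^START=no/START=yes/' /etc/default/corosync
sudo service corosync start
sudo service pacemaker start

# After a few seconds both nodes should report as Online
sudo crm status
```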

Our nodes are sharing the same squid configuration file and the squid disk-cache directory.

Therefore the NFS mount to the NAS needs to be one of our managed cluster resources.

Furthermore, we want to receive an informational mail whenever our resources migrate between nodes.

So we have the following highly available resources for pacemaker to manage:

Cluster IP address

Proxy server

Informational e-Mail

Squid cache mount

In our cluster example the ip addresses are:

Node 1: pi1 192.168.1.101

Node 2: pi2 192.168.1.102

NAS: 192.168.1.100 (for shared file system)

Cluster IP: 192.168.1.253 (up on our active node)

The full pacemaker configuration:

```
node pi1 \
        attributes standby="off"
node pi2 \
        attributes standby="off"
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.253" cidr_netmask="24" \
        op monitor interval="30"
primitive ProxyRsc1 ocf:heartbeat:Squid \
        params squid_exe="/usr/sbin/squid3" squid_conf="/mnt/a300/squid.conf" \
               squid_pidfile="/var/run/squid3.pid" squid_port="3128" \
               squid_stop_timeout="10" debug_mode="v" debug_log="/var/log/cluster.log" \
        op start interval="0" timeout="5s" \
        op stop interval="0" timeout="10s" \
        op monitor interval="20s" timeout="30s"
primitive p_MailTo ocf:heartbeat:MailTo \
        params email="userid@example.com" \
        op monitor interval="10" timeout="10" depth="0"
primitive share-cache ocf:heartbeat:Filesystem \
        params device="192.168.1.100:/share" directory="/mnt/a300" fstype="nfs" \
               options="user,rw,async,vers=3" fast_stop="no" \
        op monitor interval="20s" timeout="40s" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="120s" \
        meta is-managed="true" target-role="started"
group ProxyAndIP ClusterIP share-cache ProxyRsc1 p_MailTo
location prefer-pi1 ProxyRsc1 50: pi1
order SquidAfterIP inf: ClusterIP share-cache ProxyRsc1 p_MailTo
property $id="cib-bootstrap-options" \
        dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        last-lrm-refresh="1441655139" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="200"
```
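The configuration is entered on one node only; the cluster replicates it to the other node automatically. A sketch of loading it (the file name proxy-cluster.crm is a placeholder):

```shell
# Open the cluster configuration in an editor and paste the config above
sudo crm configure edit

# Alternatively, load it from a file
sudo crm configure load update proxy-cluster.crm

# Check the result for errors
sudo crm_verify -L
```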

You can check the Clusterlabs documentation for configuration options.

Some of the most useful commands are:

root@pi1 ~# crm status

============

Last updated: Wed Oct 28 12:29:11 2015

Last change: Sat Oct 3 23:18:50 2015 via crm_attribute on pi1

Stack: openais

Current DC: pi1 - partition with quorum

Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ pi1 pi2 ]

Resource Group: ProxyAndIP

     ClusterIP (ocf::heartbeat:IPaddr2): Started pi1

     share-cache (ocf::heartbeat:Filesystem): Started pi1

     ProxyRsc1 (ocf::heartbeat:Squid): Started pi1

     p_MailTo (ocf::heartbeat:MailTo): Started pi1

root@pi1 ~# crm resource move ProxyRsc1 pi2 # Move resources to the other node for pi1 maintenance

root@pi1 ~# crm resource unmove ProxyRsc1 # Give the control back to the cluster

root@pi1 ~# crm configure show # Show configuration

root@pi1 ~# crm_mon # Live monitoring

root@pi1 ~# crm_resource -P # Clean up "Failed actions" messages

The last thing you need to do is to configure the cluster IP address as your new proxy server on all your devices.
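For command-line tools on a Linux client, pointing at the cluster IP might look like this (browsers and OS-level proxy settings take the same host and port):

```shell
# Use the floating cluster IP, never the per-node addresses
export http_proxy="http://192.168.1.253:3128"
export https_proxy="http://192.168.1.253:3128"
```

curl and similar tools will now route through whichever node currently holds the cluster IP, so a failover is transparent to the clients.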