
Session Manager Failover Guide

Preface

The purpose of this document is to provide a failover mechanism for the Inuvika OVD Session Manager role (OSM) and the additional roles that may be hosted on the OSM servers.

History

Version Date Comments
1.5 2017-11-09 Upgrades for 2.5; added better support for session sharing and database.
1.4 2017-07-18 Reformatting
1.3 2016-11-23 Updated document for CentOS 7
1.2 2016-07-11 Updated document for CentOS 6
1.1 2015-09-14 Updated document to new format and added clarifications.
1.0 2015-01-23 Published the first version of the document

Introduction

The purpose of this document is to provide a description of an example failover mechanism that can be deployed for the Inuvika OVD Session Manager role (OSM) and for the additional roles that may be hosted on the OSM servers. The mechanism is based on a master/slave architecture. The example is based on using the Linux Heartbeat daemon and synchronizing the data used in the OVD Session Manager between the master and the slave nodes. Other mechanisms may be employed to achieve a similar result.

Overview

The example described in this document is based on a simple OVD architecture to explain each step of the failover implementation. The architecture is depicted in the following figure:

OVD Architecture

The OSM1 server serves all OVD farm requests and tasks from its network interface eth0, which hosts both the internal IP address and the virtual IP address (VIP). In the example architecture, these are 192.168.0.201 and 192.168.0.200 respectively. OSM1 is statically set as the master server in the cluster.

For convenience, the VIP can be registered in the enterprise DNS server. In the example, the VIP is registered as osm-vip.inuvika.demo.
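
As an illustration, the corresponding records could look like the following in a BIND-style zone file for inuvika.demo. This is a sketch only; the exact syntax depends on your DNS server, and Active Directory or other DNS servers use their own management tools.

; Excerpt from the inuvika.demo zone (illustrative)
osm1      IN  A  192.168.0.201
osm2      IN  A  192.168.0.202
osm-vip   IN  A  192.168.0.200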

The Heartbeat daemon which runs on both OSM servers, broadcasts over a dedicated network accessed using eth1. When the Heartbeat daemon detects that OSM1 has failed, it will switch the virtual IP address to the OSM2 server. The resulting downtime will be no more than a few seconds.

When the OSM1 server has been recovered and is available again, the Heartbeat daemon will switch the virtual IP address back to OSM1.

Although not covered in this document, it is possible to use an external MySQL clustered solution instead (See section 7 MySQL External Cluster Integration).

The instructions in this document cover Ubuntu 16.04 LTS (Xenial Xerus), Ubuntu 14.04 LTS (Trusty Tahr), RHEL 7, and CentOS 7. For other Linux distributions, there may be differences in the actual details of the implementation. However, the architectural approach will remain the same.

Pre-requisites

In order to implement a failover mechanism, the following pre-requisites should be in place.

  • An Inuvika OVD Enterprise subscription key valid for both session managers in the setup. Please contact your local Inuvika Reseller Partner in order to initiate a replacement key request for both session managers. If you are not currently working with an Inuvika Reseller Partner, please contact your Inuvika representative directly, or submit the request form on https://www.inuvika.com/getakey.
  • Good Linux administration skills. The solution is not trivial to implement and is best handled by an experienced administrator.
  • Two Linux servers with only the default server installation (plus SSH server).
  • 2 network interfaces on each server:
    • eth0 attached to the enterprise network with a static IP address
    • eth1 on a dedicated Heartbeat network between both the OSM servers
    • NOTE: Starting with Ubuntu 16.04, the traditional ethX interface names are deprecated in favor of predictable interface names. Note the naming format your system uses and substitute it for any reference to eth throughout this document. You can use the ifconfig command to find your interface names; they will most likely begin with the 'en' prefix.
  • A network time server with which all servers are synchronized. This is mandatory for the solution to work successfully (see the time synchronization check after this list).
  • A DNS server to register the server and VIP DNS names as FQDNs (fully qualified domain names)
  • A Directory server (optional)
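
As a quick check of the time synchronization prerequisite, the following commands can be used. This is a sketch: timedatectl is available on systemd-based releases, and ntpq only if the ntp package is installed.

# Show whether the clock is NTP-synchronized
timedatectl status
# If ntpd is used, list the configured time sources
ntpq -p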

The installation of the required OSM and MySQL components will be described in this document. The Linux servers need only a default installation to meet the pre-requisites.

The resources required by this document are available in failover_scripts.zip.

Network Configuration

This section describes the network configuration required for the failover setup.

DNS Configuration

If a DNS server is not available, the HOSTS file must be properly set up for all hosts that participate in the OVD server farm.

Ubuntu LTS

In the example, the network configuration as defined in the /etc/network/interfaces file for each OSM is displayed below:

OSM1:

iface eth0 inet static
    address 192.168.0.201
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.199
    dns-search inuvika.demo

OSM2:

iface eth0 inet static
    address 192.168.0.202
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.199
    dns-search inuvika.demo

RHEL / CentOS 7.x

In the /etc/resolv.conf file, add the following for each OSM:

OSM1:

search inuvika.demo
nameserver 192.168.0.199

OSM2:

search inuvika.demo
nameserver 192.168.0.199

Checking DNS

Check that DNS resolution works correctly by using the nslookup command. Similar testing should be conducted on each OSM machine.

For example the command below entered on OSM1:

nslookup 192.168.0.199

should result in the following:

Server: 192.168.0.199
Address: 192.168.0.199#53

199.0.168.192.in-addr.arpa name = dc.inuvika.demo.
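
It is also worth confirming forward resolution of the names registered in DNS, in particular the VIP name. The following lookup should return 192.168.0.200:

nslookup osm-vip.inuvika.demo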

Hosts File Configuration

The HOSTS file must be set up correctly on each OSM server, even when a DNS server is available.

Edit the /etc/hosts file on OSM1 and OSM2

nano /etc/hosts

The contents of the file based on this example should be as follows:

OSM1:

127.0.0.1 localhost
127.0.1.1 osm1
192.168.0.201 osm1.inuvika.demo osm1
192.168.0.202 osm2.inuvika.demo osm2
192.168.0.200 osm-vip.inuvika.demo osm-vip

OSM2:

127.0.0.1 localhost
127.0.1.1 osm2
192.168.0.202 osm2.inuvika.demo osm2
192.168.0.201 osm1.inuvika.demo osm1
192.168.0.200 osm-vip.inuvika.demo osm-vip
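
To confirm that the entries are taken into account, the names can be resolved locally on each OSM; getent uses the same resolver order as the system, including /etc/hosts:

getent hosts osm1 osm2 osm-vip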

Configuring the Heartbeat Interface

Ubuntu LTS

To configure the network settings to support the dedicated Heartbeat network, add the settings for eth1 to the /etc/network/interfaces file on each OSM.

OSM1:

auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.0

OSM2:

auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.255.255.0

Then restart the system in each case to apply the changes.

reboot

Upon reboot, running the ifconfig command displays eth0 and eth1. Check that the details are correct before continuing.

RHEL / CentOS 7.x

To configure the network settings to support the dedicated Heartbeat network, add the settings to the /etc/sysconfig/network-scripts/ifcfg-eth1 file on each OSM.

OSM1:

DEVICE=eth1
BOOTPROTO=manual
ONBOOT=yes
IPADDR=10.0.0.1
NETWORK=10.0.0.0
NETMASK=255.255.255.0

OSM2:

DEVICE=eth1
BOOTPROTO=manual
ONBOOT=yes
IPADDR=10.0.0.2
NETWORK=10.0.0.0
NETMASK=255.255.255.0

Then restart the system in each case to apply the changes.

reboot

Upon reboot, running the ifconfig command displays eth0 and eth1. Check that the details are correct before continuing.
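
On a minimal CentOS 7 / RHEL 7 installation the ifconfig command (net-tools package) may not be present. In that case the same check can be performed with the ip command:

ip addr show eth0
ip addr show eth1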

MySQL installation and configuration

This section covers the MySQL installation on a dedicated server. If you prefer to use an external MySQL clustered solution, refer to Section 7: MySQL External Cluster Integration and then move on to section 5.

The MySQL installation is based on the official Inuvika OVD installation instructions.

Installation and Configuration

  1. Install MySQL

    • Ubuntu LTS

      apt-get update
      apt-get install mysql-server mysql-client
    • RHEL / CentOS 7.X

      yum install mariadb mariadb-server
      chkconfig mariadb on
      service mariadb start
      mysqladmin -u root password 'mysql_root_password'
  2. Create the OVD Database with the name ovd

    mysql -u root -p -e 'create database ovd'
  3. Disable the firewall or add a rule to authorize communication on TCP port 3306 (see the firewall example after this list).

  4. Open a MySQL session as "root":

    mysql -u root -p
  5. Grant the OSM servers the ability to connect remotely to the MySQL server:

    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm1' IDENTIFIED BY 'root';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm2' IDENTIFIED BY 'root';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm-vip' IDENTIFIED BY 'root';
    mysql> FLUSH PRIVILEGES;
  6. Restart the MySQL service for the changes to become active.

    • Ubuntu LTS

      service mysql restart
    • RHEL / CentOS 7.X

      systemctl restart mariadb
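
For step 3, a sketch of the second option (opening TCP port 3306 on the MySQL server) is shown below, assuming the default firewall tooling of each platform (ufw on Ubuntu, firewalld on RHEL / CentOS 7):

# Ubuntu LTS (ufw)
ufw allow 3306/tcp

# RHEL / CentOS 7.x (firewalld)
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload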

Cluster node configuration

The next step is to install the OVD components and establish the notification mechanism which is based on INotify.

Installing OVD Components

The OVD Session Manager (OSM), OVD Administration Console (OAC) and OVD Web Access (OWA) should be installed on OSM1 and OSM2. For instructions on installing these components, please follow the latest OVD installation instructions available in the Installation and Configuration document at https://archive.inuvika.com/ovd/latest/documentation

Warning

The same Session Manager administration account and password must be created on both OSM1 and OSM2.

Configuring session sharing

Using multiple OSM servers requires sharing session information between the servers in order to prevent disconnections.

An additional module, in charge of sharing this session information, must be installed on each OSM server.

  • Ubuntu LTS

    • Install the modules

      apt-get install php-memcache memcached
    • Edit the configuration file

      nano /etc/memcached.conf
    • Change the following line by replacing 127.0.0.1 with the OSM IP address.

      -l 127.0.0.1
    • Restart the service

      service memcached restart
    • Edit the PHP configuration file used by Apache (adjust the path to your installed PHP version if needed)

      nano /etc/php/7.0/apache2/php.ini
    • And add the following

      session.save_handler = memcache
      session.save_path = 'tcp://192.168.0.201:11211,tcp://192.168.0.202:11211'
    • Edit the memcache module configuration file (adjust the path to your installed PHP version if needed)

      nano /etc/php5/mods-available/memcache.ini
    • Add at the end of the file

      memcache.allow_failover=1
      memcache.session_redundancy=4
    • Restart Apache

      service apache2 reload
  • RHEL / CentOS 7

    • Install the modules

      yum install php-pecl-memcache memcached
    • Edit the configuration file

      vi /etc/sysconfig/memcached
    • Change the OPTIONS value so that memcached listens on the OSM IP address.

      OPTIONS="-l X.X.X.X"
    • Change CACHESIZE to allocate more memory; the value is in megabytes (1024 = 1 GB).

      CACHESIZE="1024"
    • Restart the service

      service memcached restart
    • Edit the Apache configuration file

      vi /etc/httpd/conf.d/php.conf
    • And add the following

      php_value session.save_handler "memcache"
      php_value session.save_path "tcp://192.168.0.201:11211,tcp://192.168.0.202:11211"
    • Edit the memcache module configuration file

      vi /etc/php.d/memcache.ini
    • Add at the end of the file

      memcache.allow_failover=1
      memcache.session_redundancy=4
    • Restart Apache

      service httpd reload
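
To verify the session sharing setup, it can help to confirm that the memcache extension is loaded and that both memcached instances answer on port 11211. This is a sketch; the nc (netcat) utility may need to be installed separately.

# Check that the PHP memcache extension is loaded
php -m | grep -i memcache

# Query each memcached instance; "quit" makes nc return immediately
printf 'stats\nquit\n' | nc 192.168.0.201 11211
printf 'stats\nquit\n' | nc 192.168.0.202 11211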

Inotify Installation and Configuration

The next step is to install the INotify software package on OSM1 and OSM2:

  • Ubuntu LTS

    apt-get install liblinux-inotify2-perl
  • RHEL / CentOS 7.x

    yum install http://download.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum install perl-Linux-Inotify2

SSH Key Management

SSH key management must be set up so that the two nodes can communicate using rsync over SSH without the system requesting SSH key validation.

SSH Key Configuration

OSM1 server
  1. Generate an RSA key pair on OSM1:

    ssh-keygen -t rsa

    Press enter to accept all defaults.

  2. Create a .ssh folder on OSM2 remotely:

    ssh root@10.0.0.2 mkdir -p .ssh
  3. Transfer the OSM1 SSH public key to OSM2:

    cat /root/.ssh/id_rsa.pub | ssh root@10.0.0.2 'cat >>.ssh/authorized_keys'
OSM2 server
  1. Generate an RSA key pair on OSM2:

    ssh-keygen -t rsa

    Press enter to accept all defaults.

  2. Create a .ssh folder on OSM1 remotely:

    ssh root@10.0.0.1 mkdir -p .ssh
  3. Transfer the OSM2 SSH public key to OSM1:

    cat /root/.ssh/id_rsa.pub | ssh root@10.0.0.1 'cat >>.ssh/authorized_keys'

Verifying the SSH Configuration

If the configuration has been performed successfully, SSH will not prompt for a password. To verify that this is the case, run the following command on OSM1:

ssh root@10.0.0.2

No password should be requested. If ok, enter "exit" to quit.

Then on OSM2, run the following command:

ssh root@10.0.0.1

No password should be requested. If ok, enter "exit" to quit.

If the test passes on both nodes, then the SSH key management is correctly configured.
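
Since the synchronization itself relies on rsync over SSH, a quick non-interactive transfer can also be tested. This is a sketch: the file name is arbitrary and rsync must be installed on both nodes.

# On OSM1: copy a throwaway file to OSM2 over the Heartbeat link
touch /tmp/rsync-test
rsync -a /tmp/rsync-test root@10.0.0.2:/tmp/
# Remove /tmp/rsync-test on both nodes afterwards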

Warning

rsync must be invoked using the same addresses that were used when the keys were exchanged, which in this case are the Heartbeat addresses (10.0.0.X).

Inotify Script Installation and Configuration

The Perl script named osm-inotify.pl that is provided with this documentation must be copied into the /sbin directory on both OSM servers. The script will be started by the Heartbeat daemon and will detect if an OSM server has failed.

  1. Once the file has been copied, make the file executable as follows:

    chmod +x /sbin/osm-inotify.pl
  2. Edit the script content so that it corresponds to the OSM configuration. On the OSM1 server use the OSM2 Heartbeat IP address:

    # REMOTE SERVER IP
    $rip = "10.0.0.2";

    On the OSM2 server use the OSM1 Heartbeat address:

    # REMOTE SERVER IP
    $rip = "10.0.0.1";

Warning

Do not copy/paste the script content into the Linux server. It is recommended that a dedicated tool, such as WinSCP, is used on Windows to avoid corrupting the file.
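
One simple way to confirm that the file arrived intact is to compare its checksum on both servers; the two values must match:

md5sum /sbin/osm-inotify.pl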

OVD Administration Console

The next step is to edit the MySQL settings on OSM1 and OSM2 in the OVD Administration Console.

  1. Connect to the Administration Console on OSM1 using a browser and enter the URL http://OSM1/ovd/admin/

  2. Enter the MySQL username and password in the SQL Configuration page. The page should be displayed by default upon first login since the OVD database has not been configured at this stage. Change the Database host address with the one used for the dedicated MySQL server/cluster.

    Admin Console DB configuration

  3. Perform the same operation on OSM2.

  4. Both OSM servers must use the same SSL certificate for the failover to be successful. To achieve this, copy the SSL certificate from OSM1 to OSM2.

    • Ubuntu LTS

      scp /etc/ssl/certs/ssl-cert-snakeoil.pem 10.0.0.2:/etc/ssl/certs/
      scp /etc/ssl/private/ssl-cert-snakeoil.key 10.0.0.2:/etc/ssl/private
    • RHEL / CentOS 7.X

      scp /etc/pki/tls/certs/localhost.crt 10.0.0.2:/etc/pki/tls/certs/
      scp /etc/pki/tls/private/localhost.key 10.0.0.2:/etc/pki/tls/private/
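
To confirm that both nodes now use the same certificate, the fingerprint can be compared on OSM1 and OSM2 (shown for the default certificate paths used above; the two outputs must match):

# Ubuntu LTS
openssl x509 -noout -fingerprint -in /etc/ssl/certs/ssl-cert-snakeoil.pem

# RHEL / CentOS 7.x
openssl x509 -noout -fingerprint -in /etc/pki/tls/certs/localhost.crt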

The OVD configuration of the cluster is now complete.

Heartbeat/Pacemaker Configuration

The remaining step is to install and configure a cluster infrastructure (communication and membership) service. This allows clients to know about the presence and disappearance of peer processes (in the case of OVD the OVD Session Manager) on other machines.

Several options are available for Linux distributions, but this document covers only one option per system: Heartbeat for Ubuntu LTS and Pacemaker / Corosync for EL7 (Red Hat 7.x, CentOS 7.x).

Please follow the Package Installation section first, then choose either Heartbeat Configuration or Pacemaker Configuration depending on your distribution.

Package Installation

The cluster packages must be installed on OSM1 and OSM2:

  • Ubuntu LTS

    apt-get update
    apt-get install heartbeat
  • RHEL / CentOS 7.X

    yum -y install corosync pacemaker pcs

Heartbeat Configuration

This section covers Ubuntu 16.04 LTS (Xenial Xerus) and Ubuntu 14.04 LTS (Trusty Tahr).

Warning

This section is not for RHEL 7.x and CentOS 7.x.

Please follow Pacemaker configuration for RHEL 7.x and CentOS 7.x.

There are 3 files to configure for the Heartbeat package. These files will be created in the /etc/heartbeat directory. This directory is a symlink to /etc/ha.d.

Configuration files

The files are created on OSM1 ONLY (the master node) and are:

  • authkeys

  • haresources

  • ha.cf

ha.cf Configuration File
  1. Edit the Heartbeat configuration file:

    nano /etc/heartbeat/ha.cf
  2. Add the following content:

    autojoin none
    logfile /var/log/heartbeat.log
    logfacility daemon
    node osm1 osm2
    keepalive 2
    warntime 5
    deadtime 15
    bcast eth1
    ping 192.168.0.1
    auto_failback yes

    node osm1 osm2: the order is important as it is used to set the master node. In this case OSM1 will always be the master server

    ping 192.168.0.1: the address of a ping gateway. This test pings the network gateway to check network availability. In this example the network gateway is 192.168.0.1.

    bcast eth1: the Heartbeat daemon will broadcast through the dedicated interface which in this example is eth1

    auto_failback yes: When the master node (OSM1 in this example) has been recovered after a failure, the Heartbeat daemon will revert the virtual IP address back to the master.

authkeys Configuration File

The authkeys file contains pre-shared secrets used for mutual cluster node authentication. It should only be readable by root and follows this format:

auth num
num algorithm secret

num is a simple index, starting at 1. Usually, there will only be one key in the authkeys file.

algorithm is the name of the signature algorithm used. The options are md5 or sha1. It is recommended not to use crc (a simple cyclic redundancy check), which is not secure.

secret is the actual authentication key.

The authkeys file can be created using a randomly generated secret. The following commands will achieve this task:

  1. Generate a random secret:

    dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5

    The output should be similar to:

    (stdin)= 1ff0cde062fc435a4b2f039c71e57271
  2. Create the authkeys file by editing the file:

    nano /etc/heartbeat/authkeys

    and paste in the generated secret:

    auth 1
    1 sha1 1ff0cde062fc435a4b2f039c71e57271
  3. Configure the access permissions for root only:

    chmod 0600 /etc/ha.d/authkeys

    Or, combining all the above, the commands can be concatenated as follows:

    echo -e "auth 1\n1 sha1 $( dd if=/dev/urandom bs=512 count=1 2> /dev/null| openssl md5 )" > /etc/ha.d/authkeys && chmod 600 /etc/ha.d/authkeys
haresources Configuration File

Once the ha.cf and authkeys files are set up, the next step is to configure the haresources file. This file specifies the services for the cluster and who the default owner is. The haresources file is read when the server state changes from passive to active mode. In the example, we want the server node to handle the virtual IP address 192.168.0.200 when it is active.

  1. Create the haresources file by editing the file

    nano /etc/heartbeat/haresources
  2. Enter the content below and save the file. The configuration of this file will be completed later.

    osm1 192.168.0.200 cron osm-failover.sh

    where:

    osm1: the cluster master node name

    192.168.0.200: the virtual IP address

    cron: this service is started only when the server becomes active

    osm-failover.sh: the script that runs when the server becomes active

Copy the Heartbeat Configuration Files

The configuration on OSM2 must be the same as OSM1, so the files created on OSM1 can be copied to OSM2 as follows:

cd /etc/heartbeat
scp authkeys ha.cf haresources 10.0.0.2:/etc/heartbeat/

Pacemaker configuration

This section covers only Red Hat 7.x/CentOS 7.x.

Warning

This section is not for Ubuntu LTS.

Please follow Heartbeat Configuration for Ubuntu LTS.

Because Heartbeat is deprecated on these distributions, the Pacemaker configuration steps differ from the Heartbeat steps described above.

All the following commands must be run as the root user.

Apache server status

A server-status location must be defined so that the cluster can monitor the Apache service.

Create the /etc/httpd/conf.d/status.conf file:

nano /etc/httpd/conf.d/status.conf

Add the following content:

<Location /server-status>
SetHandler server-status
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Location>

Disable Apache at startup and stop the running service:

chkconfig httpd off
systemctl stop httpd.service

Starting the pcsd service

The pcs command line interface controls and configures Corosync and Pacemaker.

Enable and start the pcsd service on the OSM1 and OSM2 servers:

systemctl enable pcsd.service
systemctl start pcsd.service

Corosync configuration

The default installation has created a user account named hacluster. Its password must be defined:

passwd hacluster

Authorize the OSM servers in the cluster:

pcs cluster auth osm1 osm2

Create the cluster with the OSM servers as nodes:

pcs cluster setup --name osmha osm1 osm2

osmha: name of the cluster

The Pacemaker and Corosync services must be enabled at startup:

systemctl enable corosync.service
systemctl enable pacemaker.service

Disable some settings that are not useful in this case:

pcs -f configuration property set stonith-enabled=false
pcs -f configuration property set no-quorum-policy=ignore

Adding resources to monitor

The VIP, Apache, and a sync script must be added as resources to monitor

Add the VIP as a resource in the cluster:

pcs -f configuration resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.0.200 cidr_netmask=24 op monitor interval=20s

Add the Apache service as a resource in the cluster

pcs -f configuration resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" op monitor interval=20s

Now a colocation constraint must be defined between these two resources to ensure that they are always assigned to the same node:

pcs -f configuration constraint colocation add WebServer virtual_ip INFINITY
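
At this stage the staged configuration file can be reviewed before it is pushed to the cluster. The commands below are a quick check using the pcs version shipped with EL7:

pcs -f configuration resource show
pcs -f configuration constraint show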

Heartbeat/Pacemaker Startup Script

Installing the Initialization Script for Heartbeat

The Heartbeat startup script will be started by the Heartbeat daemon on the active OSM server only. Copy the provided osm-failover.sh file to the directory /etc/init.d on both nodes.

These steps cover only Ubuntu LTS. Please follow Installing the Initialization Script for Pacemaker for Red Hat 7.x/CentOS 7.x.

On OSM1 and OSM2:

chmod 755 /etc/init.d/osm-failover.sh

Then update the startup conditions as follows:

  1. Ubuntu 14.04 and 16.04
    update-rc.d osm-failover.sh defaults
    update-rc.d osm-failover.sh disable

Installing the Initialization Script for Pacemaker

These steps cover only RHEL 7.x / CentOS 7.x. Please follow Installing the Initialization Script for Heartbeat for Ubuntu LTS.

The startup script will be started by Pacemaker on the active OSM only

  • Copy the provided osm-failover.sh file to /usr/sbin on both nodes.
  • Create a system service by creating the file.

    nano /etc/systemd/system/osm-failover.service
  • Add the following content:

    [Unit]
    Description=OSM Failover
    
    [Service]
    Type=forking
    ExecStart=/usr/sbin/osm-failover.sh start
    ExecStop=/usr/sbin/osm-failover.sh stop
    
    [Install]
    WantedBy=multi-user.target
  • Start the osm-failover service and enable it at startup

    systemctl start osm-failover.service
    systemctl enable osm-failover.service
  • Add the script as a resource in the cluster and add a constraint:

    sudo pcs -f configuration resource create OSM systemd:osm-failover op monitor interval=20s --force
    sudo pcs -f configuration constraint colocation add virtual_ip OSM INFINITY

Modifying the osm-failover.sh Script

The osm-failover.sh script must be modified to incorporate the settings of the installed environment.

IP source address rewrite

The OSM communicates on TCP port 1112 with the OAS and OFS servers. By default it uses the network interface that is started first; in our example the traffic must instead originate from the VIP, which is bound to eth0:0 rather than eth0. The required behavior can be enforced by using an iptables rule of the following form:

iptables -t nat -I POSTROUTING -d dest.Network -j SNAT --to-source Virtual-IP

When the Heartbeat daemon starts, it will execute the osm-failover.sh script which in turn implements the iptables rule and removes it when the daemon stops.

On OSM1 and OSM2:

  1. Edit the /etc/init.d/osm-failover.sh file or /usr/sbin/osm-failover.sh file.

  2. In the d_start() section, add/modify the line as below:

    d_start () {
        log_daemon_msg "Starting system $DEAMON_NAME Daemon"
        start-stop-daemon --background --name $DEAMON_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $DEAMON_OPT
        log_end_msg $?
        iptables -t nat -I POSTROUTING -d 192.168.0.0/24 -j SNAT --to 192.168.0.200
    }

    All packets routed to the network 192.168.0.0/24 are rewritten with the source IP 192.168.0.200 (the cluster VIP)

  3. In the d_stop() section, add/modify the line shown below:

    d_stop () {
        log_daemon_msg "Stopping system $DEAMON_NAME Daemon"
        start-stop-daemon --name $DEAMON_NAME --stop --retry 5 --quiet --name $DEAMON_NAME
        log_end_msg $?
        iptables -t nat -F
    }

    When the Heartbeat daemon is stopped, the Iptables rule will be removed.

  4. When the Heartbeat daemon is running, check that the iptables rule is properly set:

    iptables -nL -v --line-numbers -t nat

Start the Heartbeat/Pacemaker daemon

Heartbeat

These steps cover only Ubuntu LTS. Please follow Pacemaker for Red Hat 7.x/CentOS 7.x.

On OSM1 and OSM2:

service heartbeat start

The log file may help with troubleshooting any Heartbeat issues:

/var/log/heartbeat.log

The server hosting the virtual IP address, which in normal operation mode is OSM1, should list the VIP address:

root@osm1:~# ifconfig
eth0    Link encap:Ethernet HWaddr 08:00:27:4a:b3:c7
        inet addr:192.168.0.201 Bcast:192.168.0.255 Mask:255.255.255.0
        inet6 addr: fe80::a00:27ff:fe4a:b3c7/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        RX packets:92348 errors:0 dropped:81 overruns:0 frame:0
        TX packets:10856 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:19352940 (19.3 MB) TX bytes:2927565 (2.9 MB)

eth0:0  Link encap:Ethernet HWaddr 08:00:27:4a:b3:c7
        inet addr:192.168.0.200 Bcast:192.168.0.255 Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
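
A quick way to exercise the failover, using only commands already introduced above: stop Heartbeat on OSM1, check that the VIP moves to OSM2, then start Heartbeat again (with auto_failback yes the VIP returns to OSM1).

# On OSM1: simulate a failure
service heartbeat stop

# On OSM2: the VIP should now be bound to eth0:0
ifconfig eth0:0

# On OSM1: recover; the VIP fails back to OSM1
service heartbeat start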

Pacemaker

These steps cover only Red Hat 7.x/CentOS 7.x. Please follow Heartbeat for Ubuntu LTS.

Start the cluster:

sudo pcs cluster start --all

Then push the configuration to the active cluster

sudo pcs cluster cib-push configuration

You can verify the status of your cluster by using the following command

sudo pcs status

The result should look like this:

Cluster name: osmha
Last updated: Tue Nov 22 18:02:04 2016 Last change: Tue Nov 22 17:57:02 2016 by root via cibadmin on osm1
Stack: corosync
Current DC: osm2 (version 1.1.13-10.el7_2.4-44eb2dd) - partition WITHOUT quorum
2 nodes and 3 resources configured

Online: [ osm2 ]
OFFLINE: [ osm1 ]

Full list of resources:

 virtual_ip       (ocf::heartbeat:IPaddr2): Started osm2
 WebServer        (ocf::heartbeat:apache):  Started osm2
 OSM              (systemd:osm-failover):   Started osm2

PCSD Status:
  osm1: Online
  osm2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

MySQL External Cluster Integration

This section explains how to integrate an OSM server farm with an enterprise MySQL clustered solution (third party solution).

Alternatively, it is possible to use a free/open source MySQL cluster by implementing the solution from this guide: MySQL HA (High Availability) Cluster Cookbook.
