Session Manager Failover "Legacy" Guide¶
Legacy version
This document was previously named "Session Manager Failover Guide" and has been renamed to Legacy because Inuvika now recommends a more up-to-date configuration.
This legacy version remains available to support deployments that were built from it. If you are in this situation, we recommend contacting your Inuvika representative for instructions on switching to the newly recommended Session Manager Failover configuration.
If you have not yet deployed a setup based on this document, Inuvika recommends using the newer version of the Session Manager Failover Guide instead of this legacy version.
Preface¶
The purpose of this document is to provide a failover mechanism for the Inuvika OVD Session Manager role (OSM) and the additional roles that may be hosted on the OSM servers.
History¶
Version | Date | Comments |
---|---|---|
1.7 | 2020-06-25 | Rename the document to legacy |
1.6 | 2020-05-08 | Fix incorrect configuration invalidating the subscription keys |
1.5 | 2017-11-09 | Upgrades for 2.5, add better support for sessions sharing and database. |
1.4 | 2017-07-18 | Reformatting |
1.3 | 2016-11-23 | Updated document for CentOS 7 |
1.2 | 2016-07-11 | Updated document for CentOS 6 |
1.1 | 2015-09-14 | Updated document to new format and added clarifications. |
1.0 | 2015-01-23 | Published the first version of the document |
Introduction¶
The purpose of this document is to provide a description of an example failover mechanism that can be deployed for the Inuvika OVD Session Manager role (OSM) and for the additional roles that may be hosted on the OSM servers. The mechanism is based on a master/slave architecture. The example is based on using the Linux Heartbeat daemon and synchronizing the data used in the OVD Session Manager between the master and the slave nodes. Other mechanisms may be employed to achieve a similar result.
Keep your configuration up-to-date
If you have an existing deployed configuration based on this document, please refer to the Update from previous version of the document section.
Overview¶
The example described in this document is based on a simple OVD architecture to explain each step of the failover implementation. The architecture is depicted in the following figure:
The OSM1 server serves all OVD farm requests and tasks from its network interface eth0, which hosts the internal and virtual IP addresses. In the example architecture, these are 192.168.0.201 and 192.168.0.200 respectively. OSM1 is statically set as the master server in the cluster.
For convenience, the VIP can be registered in the enterprise DNS server. In the example, the VIP is registered as osm-vip.inuvika.demo.
The Heartbeat daemon, which runs on both OSM servers, broadcasts over a dedicated network accessed using eth1. When the Heartbeat daemon detects that OSM1 has failed, it switches the virtual IP address to the OSM2 server. The resulting downtime is no more than a few seconds.
When the OSM1 server has recovered and is available again, the Heartbeat daemon switches the virtual IP address back to OSM1.
Although not covered in this document, it is possible to use an external MySQL clustered solution instead (See the MySQL External Cluster Integration section).
The instructions in this document cover Ubuntu 18.04 LTS (Bionic Beaver), Ubuntu 16.04 LTS (Xenial Xerus), RHEL 7, and CentOS 7. For other Linux distributions, there may be differences in the actual details of the implementation. However, the architectural approach will remain the same.
Pre-requisites¶
In order to implement a failover mechanism, the following pre-requisites should be in place.
- Good Linux administration skills. The solution is not trivial to implement and is best handled by an experienced administrator.
- Two Linux servers with only the default server installation (plus SSH server).
- 2 network interfaces on each server:
  - eth0 attached to the enterprise network with a static IP address
  - eth1 on a dedicated Heartbeat network between the two OSM servers
Note
In Ubuntu 18.04 and 16.04, the legacy eth interface names have been deprecated. Please note the naming format your system is using and substitute it for any references to eth throughout this document. You can use the ifconfig command to find your interface names; they will most likely begin with the 'en' prefix.
- A network time server to synchronize all servers with. This is mandatory for the solution to work successfully.
- A DNS server to register the server and VIP DNS names as FQDNs (fully qualified domain names)
- A Directory server (optional)
The installation of the required OSM and MySQL components will be described in this document. The Linux servers need only have a default installation to meet the pre-requisites.
A new subscription key, valid for both Session Managers, will be required later in the Obtain a new subscription key valid on both SM nodes section.
Resources required in the document are available in failover_scripts.zip
Network Configuration¶
This section describes the network configuration.
DNS Configuration¶
If a DNS server is not available, the HOSTS file must be properly set up for all hosts that participate in the OVD server farm.
Ubuntu LTS¶
In the example, the network configuration as defined in the /etc/network/interfaces file for each OSM is displayed below:
OSM1:
iface eth0 inet static
address 192.168.0.201
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.199
dns-search inuvika.demo
OSM2:
iface eth0 inet static
address 192.168.0.202
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.199
dns-search inuvika.demo
RHEL / CentOS 7.x¶
In the /etc/resolv.conf file, add the following for each OSM:
OSM1:
search inuvika.demo
nameserver 192.168.0.199
OSM2:
search inuvika.demo
nameserver 192.168.0.199
Checking DNS¶
Check that the DNS server resolves correctly by using the nslookup command. Similar testing should be conducted on each OSM machine.
For example, the command below entered on OSM1:
nslookup 192.168.0.199
should result in the following:
Server: 192.168.0.199
Address: 192.168.0.199#53
199.0.168.192.in-addr.arpa name = dc.inuvika.demo.
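It can also be worth confirming that the VIP name registered in DNS resolves to the virtual IP address. This is only a suggested check using the example names from this document:
nslookup osm-vip.inuvika.demo
The answer should return 192.168.0.200. The same check can be repeated for osm1.inuvika.demo and osm2.inuvika.demo if those records exist.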
Hosts File Configuration¶
The HOSTS file must be set up correctly on each OSM server, even when a DNS server is available.
Edit the /etc/hosts file on OSM1 and OSM2:
nano /etc/hosts
The contents of the file based on this example should be as follows:
OSM1:
127.0.0.1 localhost
127.0.1.1 osm1
192.168.0.201 osm1.inuvika.demo osm1
192.168.0.202 osm2.inuvika.demo osm2
192.168.0.200 osm-vip.inuvika.demo osm-vip
OSM2:
127.0.0.1 localhost
127.0.1.1 osm2
192.168.0.202 osm2.inuvika.demo osm2
192.168.0.201 osm1.inuvika.demo osm1
192.168.0.200 osm-vip.inuvika.demo osm-vip
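A quick way to confirm that each name resolves as expected on both servers is to query the local resolver, which also consults the hosts file. This is only a suggested sanity check using the example names:
getent hosts osm1.inuvika.demo
getent hosts osm2.inuvika.demo
getent hosts osm-vip.inuvika.demo
Each command should print the IP address configured above.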
Configuring the Heartbeat Interface¶
Ubuntu 18.04 LTS (Bionic Beaver)¶
On Ubuntu Server 18.04 LTS, the network configuration is managed by netplan.
- Edit /etc/netplan/90_config_eth1.yaml
- Add the following content
  - OSM1
    network:
      ethernets:
        eth1:
          addresses:
            - 10.0.0.1/24
  - OSM2
    network:
      ethernets:
        eth1:
          addresses:
            - 10.0.0.2/24
- Reload the system network configuration
  netplan apply
Finally, run the ip addr command; it should display eth0 and eth1.
Check that the details are correct before continuing.
Ubuntu 16.04 LTS (Xenial Xerus)¶
To configure the network settings to support the dedicated Heartbeat network, add the settings for eth1 to the /etc/network/interfaces file on each OSM.
OSM1:
auto eth1
iface eth1 inet static
address 10.0.0.1
netmask 255.255.255.0
OSM2:
auto eth1
iface eth1 inet static
address 10.0.0.2
netmask 255.255.255.0
Then restart the system in each case to apply the changes.
reboot
Upon reboot, running the ifconfig command should display eth0 and eth1.
Check that the details are correct before continuing.
RHEL / CentOS 7.x¶
To configure the network settings to support the dedicated Heartbeat network, add the settings to the /etc/sysconfig/network-scripts/ifcfg-eth1 file on each OSM.
OSM1:
DEVICE=eth1
BOOTPROTO=manual
ONBOOT=yes
IPADDR=10.0.0.1
NETWORK=10.0.0.0
NETMASK=255.255.255.0
OSM2:
DEVICE=eth1
BOOTPROTO=manual
ONBOOT=yes
IPADDR=10.0.0.2
NETWORK=10.0.0.0
NETMASK=255.255.255.0
Then restart the system in each case to apply the changes.
reboot
Upon reboot, running the ifconfig command should display eth0 and eth1.
Check that the details are correct before continuing.
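Whichever distribution is used, it is worth confirming that both OSM servers can reach each other over the dedicated Heartbeat network before continuing. A minimal check from OSM1 (run the equivalent from OSM2 towards 10.0.0.1):
# From OSM1, check the dedicated link to OSM2
ping -c 3 10.0.0.2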
MySQL installation and configuration¶
This section covers the MySQL installation on a dedicated server. If you prefer to use an external MySQL clustered solution, refer to the MySQL External Cluster Integration section and then move on to the Cluster node configuration section.
The MySQL installation is based on the official Inuvika OVD installation instructions.
Installation and Configuration¶
- Install MySQL
  - Ubuntu LTS
    apt update
    apt install mysql-server mysql-client
  - RHEL / CentOS 7.X
    yum install mariadb mariadb-server
    chkconfig mariadb on
    service mariadb start
    mysqladmin -u root password 'mysql_root_password'
- Create the OVD database with the name ovd
  mysql -u root -p -e 'create database ovd'
- Disable the firewall or add a rule to authorize communication on TCP port 3306.
- Open a MySQL session as "root":
  mysql -u root -p
- Add the OSM servers so that they can connect remotely to the MySQL server (a connectivity check is suggested after this list):
  mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm1' IDENTIFIED BY 'root';
  mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm2' IDENTIFIED BY 'root';
  mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'osm-vip' IDENTIFIED BY 'root';
  mysql> FLUSH PRIVILEGES;
- Restart the MySQL service for the changes to become active.
  - Ubuntu LTS
    service mysql restart
  - RHEL / CentOS 7.X
    systemctl restart mariadb
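Once the grants are in place, a suggested way to confirm that each OSM server can reach the MySQL server on TCP port 3306 is to open a remote session from OSM1 and OSM2 (this requires the mysql command-line client on the OSM servers). The host name mysql.inuvika.demo below is only an example; replace it with the actual name or IP address of your dedicated MySQL server:
# Run on each OSM server; the ovd database created above should be listed
mysql -h mysql.inuvika.demo -u root -p -e 'SHOW DATABASES;'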
Cluster node configuration¶
The next step is to install the OVD components and establish the notification mechanism which is based on INotify.
Installing OVD Components¶
The OVD Session Manager (OSM), OVD Administration Console (OAC) and OVD Web Access (OWA) should be installed on OSM1 and OSM2. For instructions on installing these components, please follow the Installation and Configuration Guide.
Warning
The same Session Manager administration account and password must be created on both OSM1 and OSM2.
Obtain a new subscription key valid on both SM nodes¶
In order to continue, an Inuvika OVD Enterprise subscription key valid for both Session Managers is required.
This procedure will involve contacting your Inuvika Reseller Partner.
- Retrieve the Session Manager IDs. Run the following instructions on both nodes:
  - Download the ID retrieval tool
    wget https://archive.inuvika.com/utils/ovd-session-manager-id_1.0.zip
  - Extract the archive
    unzip ovd-session-manager-id_1.0.zip
  - Run the tool
    ./ovd-session-manager-id
  - The unique ID will be displayed
    Inuvika OVD Session Manager ID: XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    Please save this ID for the next step
- Once you have both Session Manager IDs, contact your local Inuvika Reseller Partner. If you are not currently working with an Inuvika Reseller Partner, contact your Inuvika representative directly, or submit the request form at https://www.inuvika.com/getakey.
Info
Provide the above IDs when requesting a new key.
Configuring sessions sharing¶
Using multiple OSM servers requires sharing session information between the servers in order to prevent disconnections.
An additional module, in charge of sharing this session information, must be installed on each OSM server; a quick verification is suggested at the end of this section.
- Ubuntu LTS
  - Install the modules
    apt install php-memcache memcached
  - Edit the configuration file
    nano /etc/memcached.conf
  - Change the following line, replacing 127.0.0.1 with the OSM IP address:
    -l 127.0.0.1
  - Restart the service
    service memcached restart
  - Edit the PHP configuration file used by Apache
    - For Ubuntu 18.04 LTS (Bionic Beaver)
      nano /etc/php/7.2/apache2/php.ini
    - For Ubuntu 16.04 LTS (Xenial Xerus)
      nano /etc/php/7.0/apache2/php.ini
  - And add the following
    session.save_handler = memcache
    session.save_path = 'tcp://192.168.0.201:11211,tcp://192.168.0.202:11211'
  - Edit the memcache module configuration file
    - For Ubuntu 18.04 LTS (Bionic Beaver)
      nano /etc/php/7.2/mods-available/memcache.ini
    - For Ubuntu 16.04 LTS (Xenial Xerus)
      nano /etc/php/7.0/mods-available/memcache.ini
  - Add at the end of the file
    memcache.allow_failover=1
    memcache.session_redundancy=4
  - Restart Apache
    service apache2 restart
- CentOS / RHEL 7
  - Install the modules
    yum install php-pecl-memcache memcached
  - Edit the configuration file
    vi /etc/sysconfig/memcached
  - Change the OPTIONS value so that memcached listens on the OSM IP address:
    OPTIONS="-l X.X.X.X"
  - Change CACHESIZE (in megabytes) to
    CACHESIZE="1024"
  - Restart the service
    service memcached restart
  - Edit the PHP configuration file used by Apache
    vi /etc/httpd/conf.d/php.conf
  - And add the following
    php_value session.save_handler "memcache"
    php_value session.save_path "tcp://192.168.0.201:11211,tcp://192.168.0.202:11211"
  - Edit the memcache module configuration file
    vi /etc/php.d/memcache.ini
  - Add at the end of the file
    memcache.allow_failover=1
    memcache.session_redundancy=4
  - Restart Apache
    service httpd restart
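After restarting the services on both nodes, it can be useful to confirm that memcached is listening on the expected address and that the PHP extension is loaded. These are only suggested checks; adjust the port if you changed the default:
# memcached should be listening on TCP port 11211
ss -ltn | grep 11211
# the memcache PHP extension should appear in the module list
php -m | grep -i memcache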
Inotify Installation and Configuration¶
The next step is to install the inotify software package on OSM1 and OSM2 (a quick check of the installation follows this list):
- Ubuntu LTS
  apt install liblinux-inotify2-perl
- RHEL / CentOS 7.x
  yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  yum install perl-Linux-Inotify2
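After the installation, a quick way to confirm that the Perl module is available is to load it from the command line. This is only a suggested check:
perl -MLinux::Inotify2 -e 'print "Linux::Inotify2 is available\n"'
If the module is missing, Perl prints an error instead of the confirmation message.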
SSH Key Management¶
SSH key management must be set up to allow communication between the two nodes using rsync without the system requesting SSH key validation.
SSH Key Configuration¶
OSM1 server¶
-
Generate an RSA key pair on OSM1:
ssh-keygen -t rsa
Press enter to accept all defaults.
-
Create a .ssh folder on OSM2 remotely:
ssh root@10.0.0.2 mkdir -p .ssh
-
Transfer the OSM1 SSH public key to OSM2:
cat /root/.ssh/id_rsa.pub | ssh root@10.0.0.2 'cat >>.ssh/authorized_keys'
OSM2 server¶
-
Generate an RSA key pair on OSM2:
ssh-keygen -t rsa
Press enter to accept all defaults.
-
Create a .ssh folder on OSM1 remotely:
ssh root@10.0.0.1 mkdir -p .ssh
-
Transfer the OSM2 SSH public key to OSM1:
cat /root/.ssh/id_rsa.pub | ssh root@10.0.0.1 'cat >>.ssh/authorized_keys'
Verifying the SSH Configuration¶
Using SSH should not prompt the user to accept a key request if the configuration has been performed successfully. To verify that this is the case, on OSM1 run the following command:
ssh root@10.0.0.2
No password should be requested. If ok, enter "exit" to quit.
Then on OSM2, run the following command:
ssh root@10.0.0.1
No password should be requested. If ok, enter "exit" to quit.
If the test passes on both nodes, then the SSH key management is correctly configured.
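For a stricter, non-interactive verification (useful because rsync will later run unattended), SSH can be told to fail rather than prompt. A suggested check from OSM1, with the mirror command from OSM2 towards 10.0.0.1:
# Must print the remote host name without any password or confirmation prompt
ssh -o BatchMode=yes root@10.0.0.2 hostname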
Warning
rsync uses the hostname stored within the generated key which in this case is 10.0.0.X
Inotify Script Installation and Configuration¶
The Perl script named osm-inotify.pl that is provided with this documentation must be copied into the /sbin directory on both OSM servers. The script will be started by the Heartbeat daemon and will detect if an OSM server has failed.
- Once the file has been copied, make the file executable as follows:
  chmod +x /sbin/osm-inotify.pl
- Edit the script content so that it corresponds to the OSM configuration. On the OSM1 server, use the OSM2 Heartbeat IP address:
  # REMOTE SERVER IP
  $rip = "10.0.0.2";
  On the OSM2 server, use the OSM1 Heartbeat address:
  # REMOTE SERVER IP
  $rip = "10.0.0.1";
Warning
Do not copy/paste the script content into the Linux server. It is recommended that a dedicated tool, such as WinSCP, is used on Windows to avoid corrupting the file.
OVD Administration Console¶
The next step is to edit the MySQL settings on OSM1 and OSM2 in the OVD Administration Console.
- Connect to the Administration Console on OSM1 using a browser and enter the URL
  http://OSM1/ovd/admin/
- Enter the MySQL username and password in the SQL Configuration page. The page should be displayed by default upon first login since the OVD database has not been configured at this stage. Change the database host address to the one used for the dedicated MySQL server/cluster.
- Perform the same operation on OSM2.
- Both OSM servers must use the same SSL certificate for the failover to be successful. To achieve this, copy the SSL certificate from OSM1 to OSM2 (a fingerprint comparison is suggested after this list):
  - Ubuntu LTS
    scp /etc/ssl/certs/ssl-cert-snakeoil.pem 10.0.0.2:/etc/ssl/certs/
    scp /etc/ssl/private/ssl-cert-snakeoil.key 10.0.0.2:/etc/ssl/private/
  - RHEL / CentOS 7.X
    scp /etc/pki/tls/certs/localhost.crt 10.0.0.2:/etc/pki/tls/certs/
    scp /etc/pki/tls/private/localhost.key 10.0.0.2:/etc/pki/tls/private/
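To confirm that both nodes now hold the same certificate, the fingerprints can be compared; they must be identical. A suggested check using the default Ubuntu paths from the step above (use the /etc/pki/tls paths on RHEL / CentOS):
# Run on both OSM1 and OSM2 and compare the output
openssl x509 -noout -fingerprint -sha256 -in /etc/ssl/certs/ssl-cert-snakeoil.pem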
The OVD configuration of the cluster is now complete.
Heartbeat/Pacemaker Configuration¶
The remaining step is to install and configure a cluster infrastructure (communication and membership) service. This allows clients to know about the presence and disappearance of peer processes (in the case of OVD the OVD Session Manager) on other machines.
Several options are available for Linux distributions, but this document only covers one option per system: Heartbeat for Ubuntu LTS and Pacemaker / Corosync for EL7 (Red Hat 7.x, CentOS 7.x).
Please follow the Package Installation section and afterwards choose either Heartbeat Configuration or Pacemaker Configuration depending on your distribution.
Package Installation¶
The cluster packages must be installed on OSM1 and OSM2:
- Ubuntu LTS
  apt update
  apt install heartbeat
- RHEL / CentOS 7.X
  yum -y install corosync pacemaker pcs
Heartbeat Configuration¶
This section covers Ubuntu 18.04 LTS (Bionic Beaver) and Ubuntu 16.04 LTS (Xenial Xerus).
Warning
This section is not for RHEL 7.x and CentOS 7.x.
Please follow Pacemaker configuration for RHEL 7.x and CentOS 7.x.
There are 3 files to configure for the Heartbeat package. These files will be created in the /etc/heartbeat directory. This directory is a symlink to /etc/ha.d.
Configuration files¶
The files are created on OSM1 ONLY (the master node) and are:
- authkeys
- haresources
- ha.cf
ha.cf Configuration File¶
- Edit the Heartbeat configuration file:
  nano /etc/heartbeat/ha.cf
- Add the following content:
  autojoin none
  logfile /var/log/heartbeat.log
  logfacility daemon
  node osm1 osm2
  keepalive 2
  warntime 5
  deadtime 15
  bcast eth1
  ping 192.168.0.1
  auto_failback yes
where:
- node osm1 osm2: the order is important as it is used to set the master node. In this case OSM1 will always be the master server.
- ping 192.168.0.1: the address of a ping gateway. This test pings the network gateway to check network availability. In this example the network gateway is 192.168.0.1.
- bcast eth1: the Heartbeat daemon will broadcast through the dedicated interface, which in this example is eth1.
- auto_failback yes: when the master node (OSM1 in this example) has recovered after a failure, the Heartbeat daemon will revert the virtual IP address back to the master.
authkeys Configuration File¶
The authkeys file contains pre-shared secrets used for mutual cluster node authentication. It should only be readable by root and follows this format:
auth num
num algorithm secret
num is a simple index, starting at 1. Usually, there will only be one key in the authkeys file.
algorithm is the name of the signature algorithm used. The options are either md5 or sha1. It is recommended not to use crc (a simple cyclic redundancy check), which is not secure.
secret is the actual authentication key.
The authkeys file can be created using a randomly generated secret. The following commands will achieve this task:
- Generate a random secret:
  dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5
  The output should be similar to:
  (stdin)= 1ff0cde062fc435a4b2f039c71e57271
- Create the authkeys file by editing the file:
  nano /etc/heartbeat/authkeys
  and paste in the generated secret:
  auth 1
  1 sha1 1ff0cde062fc435a4b2f039c71e57271
- Configure the access permissions for root only:
  chmod 0600 /etc/ha.d/authkeys
Or, combining all the above, the commands can be concatenated as follows:
echo -e "auth 1\n1 sha1 $( dd if=/dev/urandom bs=512 count=1 2> /dev/null| openssl md5 )" > /etc/ha.d/authkeys && chmod 600 /etc/ha.d/authkeys
haresources Configuration File¶
Once the ha.cf and authkeys files are set up, the next step is to configure the haresources file. This file specifies the services for the cluster and who the default owner is. The haresources file is read when the server state changes from passive to active mode. In the example, we want the server node to handle the virtual IP address 192.168.0.200 when it is active.
- Create the haresources file by editing the file
  nano /etc/heartbeat/haresources
- Enter the content below and save the file. The configuration of this file will be completed later.
  osm1 192.168.0.200 cron osm-failover.sh
where:
- osm1: the cluster master node name
- 192.168.0.200: the virtual IP address
- cron/crond: this service is started only when the server becomes active
- osm-failover.sh: the script that runs when the server becomes active
Copy the Heartbeat Configuration Files¶
The configuration on OSM2 must be the same as OSM1, so the files created on OSM1 can be copied to OSM2 as follows:
cd /etc/heartbeat
scp authkeys ha.cf haresources 10.0.0.2:/etc/heartbeat/
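Because the authkeys file must only be readable by root, it is worth double-checking its permissions on OSM2 after the copy. A suggested check, with a fix if required:
ssh 10.0.0.2 'stat -c "%a %U %n" /etc/ha.d/authkeys'
# If the mode is not 600, tighten it:
ssh 10.0.0.2 'chmod 600 /etc/ha.d/authkeys'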
Pacemaker configuration¶
This section covers only Red Hat 7.x/CentOS 7.x.
As Heartbeat is deprecated, the configuration steps for Pacemaker are not the same.
All the following commands must be run as the root user.
Apache server status¶
A server-status location must be defined so that the Apache service can be monitored.
Create the /etc/httpd/conf.d/status.conf file:
nano /etc/httpd/conf.d/status.conf
Add the following content:
<Location /server-status>
SetHandler server-status
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Location>
Disable Apache from starting automatically and stop the running service:
chkconfig httpd off
systemctl stop httpd.service
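Before handing control of Apache over to Pacemaker, it can help to confirm that the new status.conf does not break the Apache configuration. A suggested syntax check:
apachectl configtest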
Starting PCS command¶
The pcs command line interface controls and configures Corosync and Pacemaker.
Enable and start the pcsd service on both OSM servers:
systemctl enable pcsd.service
systemctl start pcsd.service
Corosync configuration¶
The default installation creates a user account named hacluster. Set a password for it:
passwd hacluster
Authenticate the OSM servers to the cluster:
pcs cluster auth osm01 osm02
Create the cluster with the OSM servers as nodes:
pcs cluster setup --name osmha osm01 osm02
osmha: the name of the cluster
Enable the Pacemaker and Corosync services so that they start automatically at boot:
systemctl enable corosync.service
systemctl enable pacemaker.service
Disable some settings that are not useful in this case:
pcs -f configuration property set stonith-enabled=false
pcs -f configuration property set no-quorum-policy=ignore
Adding resources to monitor¶
The VIP, Apache, and a sync script must be added as resources to monitor
Add the VIP as a resource in the cluster:
pcs -f configuration resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.0.200 cidr_netmask=24 op monitor interval=20s
Add the Apache service as a resource in the cluster
pcs -f configuration resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" op monitor interval=20s
Now, a colocation constraint must be defined between these two resources to ensure that they are always assigned to the same node.
pcs -f configuration constraint colocation add WebServer virtual_ip INFINITY
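Because the pcs commands above operate on the offline CIB file named configuration (it is pushed to the live cluster later in this document), the pending resources and constraints can be reviewed before the push. The exact sub-commands vary slightly between pcs releases; on EL7 the following generally works:
pcs -f configuration resource show
pcs -f configuration constraint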
Heartbeat/Pacemaker Startup Script¶
Installing the Initialization Script for Heartbeat¶
The Heartbeat startup script will be started by the Heartbeat daemon on the active OSM server only. Copy the provided osm-failover.sh file to the /etc/init.d directory on both nodes.
These steps cover only Ubuntu LTS. Please follow Installing the Initialization Script for Pacemaker for Red Hat 7.x/CentOS 7.x.
On OSM1 and OSM2:
chmod 755 /etc/init.d/osm-failover.sh
Then update the startup conditions as follows:
- Ubuntu LTS
  update-rc.d osm-failover.sh defaults
  update-rc.d osm-failover.sh disable
Installing the Initialization Script for Pacemaker¶
These steps cover only RHEL 7.x / CentOS 7.x. Please follow Installing the Initialization Script for Heartbeat for Ubuntu LTS.
The startup script will be started by Pacemaker on the active OSM only.
- Copy the provided osm-failover.sh file to /usr/sbin on both nodes.
- Create a system service by creating the file:
  nano /etc/systemd/system/osm-failover.service
- Add the following content:
  [Unit]
  Description=OSM Failover
  [Service]
  Type=forking
  ExecStart=/usr/sbin/osm-failover.sh start
  ExecStop=/usr/sbin/osm-failover.sh stop
  [Install]
  WantedBy=multi-user.target
- Start the osm-failover service and enable it at startup
  systemctl start osm-failover.service
  systemctl enable osm-failover.service
- Add the script as a resource in the cluster and add a constraint:
  sudo pcs -f configuration resource create OSM systemd:osm-failover op monitor interval=20s --force
  sudo pcs -f configuration constraint colocation add virtual_ip OSM INFINITY
Modifying the osm-failover.sh Script¶
The osm-failover.sh script must be modified to incorporate the settings of the installed environment.
IP source address rewrite
The OSM communicates on TCP port 1112 with the OAS and OFS servers. By default it uses the network interface that is started first. In our example it must use the VIP, which is eth0:0 instead of eth0. The required behavior can be enforced by using an iptables rule. The rule to use is:
iptables -t nat -I POSTROUTING -d dest.Network -j SNAT --to Virtual-IP
When the Heartbeat daemon starts, it will execute the osm-failover.sh script which in turn implements the iptables rule and removes it when the daemon stops.
On OSM1 and OSM2:
- Edit the /etc/init.d/osm-failover.sh file or the /usr/sbin/osm-failover.sh file.
- In the d_start() section, add/modify the line as below:
  d_start () {
      log_daemon_msg "Starting system $DEAMON_NAME Daemon"
      start-stop-daemon --background --name $DEAMON_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $DEAMON_OPT
      log_end_msg $?
      iptables -t nat -I POSTROUTING -d 192.168.0.0/24 -j SNAT --to 192.168.0.200
  }
  All packets routed to the network 192.168.0.0/24 are rewritten with the source IP 192.168.0.200 (the cluster VIP).
- In the d_stop() section, add/modify the line shown below:
  d_stop () {
      log_daemon_msg "Stopping system $DEAMON_NAME Daemon"
      start-stop-daemon --name $DEAMON_NAME --stop --retry 5 --quiet --name $DEAMON_NAME
      log_end_msg $?
      iptables -t nat -F
  }
When the Heartbeat daemon is stopped, the Iptables rule will be removed.
- When the Heartbeat daemon is running, check that the iptables rule is properly set:
iptables -nL -v --line-numbers -t nat
Start the Heartbeat/Pacemaker daemon¶
Heartbeat¶
These steps cover only Ubuntu LTS. Please follow Pacemaker for Red Hat 7.x/CentOS 7.x.
On OSM1 and OSM2:
service heartbeat start
The log file may help with troubleshooting any Heartbeat issues:
/var/log/heartbeat.log
The server hosting the virtual IP address, which in normal operation mode is OSM1, should list the VIP address:
root@osm1:~# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:4a:b3:c7
inet addr:192.168.0.201 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe4a:b3c7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:92348 errors:0 dropped:81 overruns:0 frame:0
TX packets:10856 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:19352940 (19.3 MB) TX bytes:2927565 (2.9 MB)
eth0:0 Link encap:Ethernet HWaddr 08:00:27:4a:b3:c7
inet addr:192.168.0.200 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
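Once Heartbeat is running on both nodes, a simple way to validate the failover (preferably outside production hours) is to stop the daemon on the master and watch the VIP move. A suggested test sequence:
# On OSM1: simulate a failure of the master node
service heartbeat stop
# On OSM2: the VIP 192.168.0.200 should appear on eth0 within a few seconds
ifconfig | grep 192.168.0.200
# On OSM1: restart Heartbeat; with auto_failback yes the VIP returns to OSM1
service heartbeat start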
Pacemaker¶
These steps cover only Red Hat 7.x/CentOS 7.x. Please follow Heartbeat for Ubuntu LTS.
Start the cluster:
sudo pcs cluster start --all
Then push the configuration to the active cluster
sudo pcs cluster cib-push configuration
You can verify the status of your cluster by using the following command
sudo pcs status
The result should look like this:
Cluster name: osmha
Last updated: Tue Nov 22 18:02:04 2016 Last change: Tue Nov 22 17:57:02 2016 by root via cibadmin on osm01
Stack: corosync
Current DC: osm02 (version 1.1.13-10.el7_2.4-44eb2dd) - partition WITHOUT quorum
2 nodes and 3 resources configured
Online: [ osm02 ]
OFFLINE: [ osm01 ]
Full list of resources:
virtual_ip (ocf::heartbeat:IPaddr2): Started osm02
WebServer (ocf::heartbeat:apache): Started osm02
OSM (systemd:osmfailover): Started osm02
PCSD Status:
osm01: Online
osm02: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
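A similar failover test can be run with Pacemaker by putting the active node into standby, checking that the resources move, and then taking it out of standby again. On the EL7 pcs version the commands are typically the following (newer pcs releases use pcs node standby instead):
sudo pcs cluster standby osm01
sudo pcs status
sudo pcs cluster unstandby osm01
After the second command, virtual_ip, WebServer and OSM should be reported as started on the other node.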
MySQL External Cluster Integration¶
This section explains how to integrate an OSM server farm with an enterprise MySQL clustered solution (third party solution).
Alternatively, it is possible to use a free/open-source MySQL cluster by implementing the solution from this guide: MySQL HA (High Availability) Cluster Cookbook.
Update from previous version of the document¶
If you have an existing configuration based on a previous version of this document, this section will guide you through the updates needed to keep your configuration aligned with Inuvika's recommended configuration.
Update from version 1.5 of the document¶
Version 1.5 of the document contained an incorrect configuration that invalidated the subscription keys. This section will help you to fix the issue and install a new key.
Warning
This is a critical fix that must be carried out cautiously on production environments. The Session Manager may be temporarily out of service for a few seconds.
It is recommended to carry out this process during a time when your farm is not being used heavily (ex: nights or weekends). You may even prefer to put the farm into maintenance mode.
- Apply the following instructions on both Session Manager nodes:
  - Deploy the new version of the osm-inotify.pl script into the /sbin/ directory (overwriting the previously installed version)
  - Make sure the script is executable
    chmod +x /sbin/osm-inotify.pl
- Then run the following on the master node only:
  - For EL7
    pcs resource disable OSM; pcs resource enable OSM
  - For Ubuntu
    service osm-failover.sh restart
- Finally, follow the Obtain a new subscription key valid on both SM nodes section and install the new key.
Once completed, run a quick checkup of your OVD farm to ensure everything is functioning properly (ex: check the OVD Admin Console, start an OVD session, etc...).