High Availability: Linux Active-Passive

Introduction

High Availability (HA) of Universal Data Mover Gateway means that it has been set up as a redundant system: in addition to the components that are processing work, backup components are available to continue processing through a hardware or software failure.

This page describes a High Availability environment, with an Active-Passive setup.

High Availability System: Active-Passive

The following illustration shows a typical, although simplified, Universal Data Mover Gateway Linux system in a High Availability environment.

In this environment, there are:

  • Two UDMG Server instances (MFT nodes)

  • One Shared Storage server machine. Resilience can be achieved with a cluster file system solution.

  • One Database Server machine. Resilience can be achieved with a cluster database solution.

The components in white are active and operating. The components in gray are available for operations but currently are inactive (passive).

The Linux HAProxy and Keepalived utilities are installed to manage a virtual IP address that clients use to reach the UDMG services.

Reference table for the sample server configuration:

IP               Hostname   Description
192.168.56.110   storage    NFS Server
192.168.56.100   vip        Virtual IP Address
192.168.56.101   mft_1      MFT Node
192.168.56.102   mft_2      MFT Node
192.168.56.120   db         PostgreSQL Database

High Availability Configuration

To achieve High Availability for your Universal Data Mover Gateway system, you must configure the nodes and applications.

Configuring the NFS Server

Install the package required for the NFS server and related utilities:

yum install nfs-utils

The package name may differ depending on the distribution that you are using.

Add the directory that you want to share with the MFT Servers to /etc/exports, for example /data:

# /etc/exports
#
# See exports(5) for a description.

# use exportfs -arv to reread
/data mft_1(rw,no_subtree_check,no_root_squash)
/data mft_2(rw,no_subtree_check,no_root_squash)
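
After editing /etc/exports, the export list can be reloaded without restarting the NFS service, as noted in the comment above:

# Re-export all entries from /etc/exports, verbosely
exportfs -arv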

Edit your /etc/hosts with the names from the Reference Table:

#/etc/hosts
127.0.0.1 localhost.localdomain localhost storage
::1 localhost localhost.localdomain


192.168.56.110 storage
192.168.56.100 vip
192.168.56.101 mft_1
192.168.56.102 mft_2

Start the NFS service:

service nfs-server start
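
On systemd-based distributions, the equivalent is to enable and start the unit in one step (assuming the unit is named nfs-server, which may vary):

systemctl enable --now nfs-server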

Show all exported filesystems:

# showmount -e
Export list for storage:
/data mft_1,mft_2

Configuring the first MFT Server

System Configuration

Configure the /etc/hosts file with the following entries:

#/etc/hosts
192.168.56.110 storage
192.168.56.100 vip
192.168.56.101 mft_1
192.168.56.102 mft_2
192.168.56.120 db

Enable the following option in /etc/sysctl.conf; it allows HAProxy to bind to the shared IP address that will be defined with Keepalived:

net.ipv4.ip_nonlocal_bind = 1
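
Apply the change without rebooting and confirm the new value:

# Reload the kernel parameters from /etc/sysctl.conf
sysctl -p
# Verify: should print net.ipv4.ip_nonlocal_bind = 1
sysctl net.ipv4.ip_nonlocal_bind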

Install the application packages for HAProxy, Keepalived, and the NFS client utilities:

mft:~# yum install haproxy keepalived nfs-utils

Configuring NFS Client

Check that you can see the exported filesystem from the storage server:

mft:~# showmount -e storage
Export list for storage:
/data mft_1,mft_2

Edit /etc/fstab to mount the remote filesystem on this server:

storage:/data /data nfs defaults,nolock,vers=3,soft 0 0

NFSv3 is used because NFSv4 does not support the nolock option; without this option, the server could hang.

Mount all the filesystems:

mount -a

Verify that the filesystem is mounted and writable:

mft:~# df -h /data
Filesystem Size Used Available Use% Mounted on
storage:/data 15.6G 173.0M 14.6G 1% /data

mft:~# touch /data/test.1.txt
mft:~# stat /data/test.1.txt
File: /data/test.1.txt
Size: 0 Blocks: 0 IO Block: 262144 regular empty file
Device: 1bh/27d Inode: 260790 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-09-28 16:38:01.489853504 +0000
Modify: 2022-09-28 16:38:01.489853504 +0000
Change: 2022-09-28 16:38:01.489853504 +0000

Validate that the file exists on the storage server before continuing.
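
For example, a quick check on the storage server, using the test file created above:

storage:~# ls -l /data/test.1.txt
-rw-r--r-- 1 root root 0 Sep 28 16:38 /data/test.1.txt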

Configuring Keepalived

Now configure the Keepalived service.

Create the following configuration under /etc/keepalived/keepalived.conf:

vrrp_instance VI_1 {
        state MASTER      # Comment this line out on the backup node
        # state BACKUP    # Uncomment this line on the backup node
        interface eth1
        virtual_router_id 51
        priority 255
        advert_int 1
        authentication {
              auth_type PASS
              auth_pass 12345
        }
        virtual_ipaddress {
              192.168.56.100/24 dev eth1 label eth1:1
        }
}
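
Optionally, Keepalived can also track the HAProxy process so that the VIP fails over when HAProxy itself dies, not only when the whole node goes down. This is a minimal sketch, not part of the base setup; the vrrp_script block is added to the same keepalived.conf and referenced from the vrrp_instance:

vrrp_script chk_haproxy {
        script "pidof haproxy"   # a non-zero exit code marks the node as faulty
        interval 2               # run the check every 2 seconds
}

vrrp_instance VI_1 {
        ...
        track_script {
              chk_haproxy
        }
}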

In this example, the VIP is attached to the eth1 interface.

Before starting the service, check the list of network interfaces:

# ifconfig -a
eth0       Link encap:Ethernet HWaddr 08:00:27:1E:96:CF
           inet addr:10.0.2.15 Bcast:0.0.0.0 Mask:255.255.255.0
           inet6 addr: fe80::a00:27ff:fe1e:96cf/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           RX packets:2 errors:0 dropped:0 overruns:0 frame:0
           TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:1180 (1.1 KiB) TX bytes:1970 (1.9 KiB)

eth1        Link encap:Ethernet HWaddr 08:00:27:E3:61:8C
            inet addr:192.168.56.101 Bcast:0.0.0.0 Mask:255.255.255.0
            inet6 addr: fe80::a00:27ff:fee3:618c/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:6322 errors:0 dropped:0 overruns:0 frame:0
            TX packets:19415 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1181451 (1.1 MiB) TX bytes:1548756 (1.4 MiB)

lo          Link encap:Local Loopback
            inet addr:127.0.0.1 Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:65536 Metric:1
            RX packets:96944 errors:0 dropped:0 overruns:0 frame:0
            TX packets:96944 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:5988148 (5.7 MiB) TX bytes:5988148 (5.7 MiB)

Start the service:

# service keepalived start

Now you should see a network alias configured as eth1:1

mft:~# ifconfig -a
eth1       Link encap:Ethernet HWaddr 08:00:27:E3:61:8C
           inet addr:192.168.56.101 Bcast:0.0.0.0 Mask:255.255.255.0
           inet6 addr: fe80::a00:27ff:fee3:618c/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           RX packets:6769 errors:0 dropped:0 overruns:0 frame:0
           TX packets:20079 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:1232486 (1.1 MiB) TX bytes:1609240 (1.5 MiB)

eth1:1     Link encap:Ethernet HWaddr 08:00:27:E3:61:8C
           inet addr:192.168.56.100 Bcast:0.0.0.0 Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

This is the virtual IP address configured by Keepalived.
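
On distributions where ifconfig is no longer available, the same check can be done with the ip utility:

# Both the node address and the VIP (labelled eth1:1) should be listed
ip -4 addr show dev eth1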

Configuring HAProxy

Create the following configuration file, /etc/haproxy/haproxy.cfg. Note the following:

  • The placeholder "<SERVERNAME>" must be replaced with the server name for this configuration. Since this file is installed on two machines, replace it with mft_1, mft_2, or localhost.

  • The placeholder "<SERVERPORT>" is the port of the UDMG server; it is 18080 with the default installation guidelines.

  • The HAProxy Status API is configured on port 8081.

  • Two ranges of ports are forwarded for the FTP (3000-3010) and SFTP (4000-4010) inbound connections to the backend UDMG servers. These ranges can be tuned according to the desired MFT configuration.

#/etc/haproxy/haproxy.cfg

# --------------------------------------------------------------------------- #
# Global
# --------------------------------------------------------------------------- #
global
  log 127.0.0.1 local0 info


# --------------------------------------------------------------------------- #
# Defaults Timeouts
# --------------------------------------------------------------------------- #
defaults
  retries 3
  option redispatch
  timeout client 30s
  timeout connect 4s
  timeout server 30s


# --------------------------------------------------------------------------- #
# Stats
# --------------------------------------------------------------------------- #
listen stats
 bind *:8081
 mode http
 log global
 maxconn 10
 stats enable
 stats hide-version
 stats refresh 30s
 stats show-node
 stats auth admin:password
 stats uri /status


# --------------------------------------------------------------------------- #
# FTP - mft Servers
# --------------------------------------------------------------------------- #
frontend ftp_service_front
 bind vip:3000-3010 transparent
 mode tcp
 use_backend ftp_service_backend


backend ftp_service_backend
 mode tcp
 stick-table type ip size 10k expire 300s
 stick on src
 server gw0 <SERVERNAME> check port <SERVERPORT>


# --------------------------------------------------------------------------- #
# SFTP - mft Servers
# --------------------------------------------------------------------------- #
frontend sftp_service_front
 bind vip:4000-4010 transparent
 mode tcp
 use_backend sftp_service_backend


backend sftp_service_backend
 mode tcp
 stick-table type ip size 10k expire 300s
 stick on src
 server gw0 <SERVERNAME> check port <SERVERPORT>


# --------------------------------------------------------------------------- #
# UDMG Server
# --------------------------------------------------------------------------- #
frontend gw_service_front
 bind vip:8080 transparent
 mode http
 default_backend gw_service_backend


backend gw_service_backend
 mode http
 balance roundrobin
 cookie SRVNAME insert
 server gw0 <SERVERNAME> check port <SERVERPORT> cookie S01


# --------------------------------------------------------------------------- #
# Nginx
# --------------------------------------------------------------------------- #
frontend nx_service_front
 bind vip:80 transparent
 mode http
 default_backend nx_service_backend

backend nx_service_backend
 mode http
 balance roundrobin
 cookie SRVNAME insert
 server gw0 <SERVERNAME> check port 80 cookie S01


# --------------------------------------------------------------------------- #
# END
# --------------------------------------------------------------------------- #
# EOF
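
Before starting the service, the configuration syntax can be validated:

# Check the configuration file for errors without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg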

Start the service:

# service haproxy start

Check that the process is running without any issues:

mft:~# ps ax | grep -i haproxy
2122 root 0:04 /usr/sbin/haproxy -D -p /var/run/haproxy.pid -f /etc/haproxy/haproxy.cfg

Verify that HAProxy is binding the ports that will be used by the UDMG services:

mft:~# netstat -tanlp | grep -i haproxy
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3000 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3001 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3002 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3003 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3004 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3005 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3006 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3007 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3008 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3009 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:3010 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4000 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4001 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4002 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4003 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4004 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4005 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4006 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4007 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4008 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4009 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:4010 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:80 0.0.0.0:* LISTEN 2122/haproxy
tcp 0 0 192.168.56.100:8080 0.0.0.0:* LISTEN 2122/haproxy
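
The stats endpoint configured above can also be queried through the VIP, using the credentials from haproxy.cfg:

# Returns the HTML status page when HAProxy is serving the stats endpoint
curl -u admin:password http://192.168.56.100:8081/status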

Configuring the UDMG Components

Follow the UDMG for Linux Installation guide.

UDMG Server

In the server.ini configuration, the following settings must be updated:

[global]
; The name given to identify this gateway instance. If the database is shared between multiple gateways, this name MUST be unique across these gateways.
GatewayName = mft

[paths]
; The root directory of the gateway. By default, it is the working directory of the process.
GatewayHome = /data/root

[admin]
; The address used by the admin interface.
Host = <SERVERNAME or SERVER IP>

[database]
; The path to the file containing the passphrase used to encrypt account passwords using AES
AESPassphrase = /data/passphrase.aes

The [paths] and [database] sections must point to /data (the NFS storage).

UDMG Authentication Proxy

In the proxy configuration file, server.toml:

# Proxy Configuration
[proxy]
recover = true
cors = true
tracker = true
logger = true
port = "5000"
inet = "127.0.0.1"

[service.mft]
protocol = "http"
policy = "failover"
admins = ["admin"]

[[service.mft.targets]]
hostname = "<SERVERNAME or SERVER IP>"
port = 8080
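
Because the policy is set to failover, the proxy tries its targets in order. As an illustration only (this is an assumption, not part of the base setup), a second target for the peer node could be declared as an additional TOML table:

# Hypothetical fallback target pointing at the peer node
[[service.mft.targets]]
hostname = "mft_2"
port = 8080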

Start the UDMG Server and UDMG Authentication Proxy.

NGINX for UDMG Admin UI

Configure the NGINX service to listen on the address that was configured before:

upstream mft_proxy {
    ip_hash;
    server <SERVERNAME or SERVER IP>:5000;
    keepalive 10;
}

server {
    listen <SERVERNAME or SERVER IP>:80 default_server;

    location / {
        try_files $uri $uri/ /index.html;
        root "/var/www/localhost/htdocs";
    }

    location /service/ {
        proxy_pass http://mft_proxy/;
    }

    # You may need this to prevent return 404 recursion.
    location = /404.html {
        internal;
    }
}
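
Before starting, the configuration can be checked for syntax errors:

# Test the NGINX configuration files
nginx -t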


Start the NGINX service.

Configuring the second MFT Server

Repeat the steps above, with the following difference regarding the state of the virtual IP.

Keepalived

Create the following configuration under /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
        # state MASTER    # This line is commented out on the backup node
        state BACKUP      # This line is uncommented on the backup node
        interface eth1
        virtual_router_id 51
        priority 254      # Lower than the master's priority (255), so the master is preferred
        advert_int 1
        authentication {
              auth_type PASS
              auth_pass 12345
        }
        virtual_ipaddress {
              192.168.56.100/24 dev eth1 label eth1:1
        }
}

Checking the failover

To see whether the configuration was successful, stop one of the MFT servers and validate that the VIP interface moves to the other host, as shown below.
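
For example, using the hostnames from the reference table:

# On mft_1: stop Keepalived to simulate a node failure
mft_1:~# service keepalived stop

# On mft_2: the VIP 192.168.56.100 should now appear as eth1:1
mft_2:~# ip -4 addr show dev eth1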

References

This document references the following documents.

Name                                         Location
Setting up a Linux cluster with Keepalived   https://www.redhat.com/sysadmin/keepalived-basics