Overview

A federated architecture best fits customers whose gateways handle a high load (thousands of transactions per second or more).
The cell environment implements the federated architecture by distributing DPOD's Store and DPOD's agents across multiple federated servers.

The cell environment has two main components: the cell manager and the federated cell members (FCMs).

The following diagram describes the cell environment:

Prerequisites

  1. Before installing a cell environment, complete the sizing process with the IBM Support Team to get hardware and architecture recommendations suitable for your requirements.
  2. DPOD cell manager and federated cell members must be of the same version (minimum version is 1.0.8.6).
  3. DPOD cell manager is usually virtual and can be installed in either Appliance Mode or Non-Appliance Mode with the Medium Load architecture type, as detailed in the Hardware and Software Requirements.
  4. DPOD federated cell members (FCMs) can be one of the following:
    1. Physical servers installed in Non-appliance Mode (based on RHEL) with High_20dv architecture type, as detailed in the Hardware and Software Requirements.
      Physical servers are used when the cell is required to process high transactions per second (TPS) load.
    2. Virtual servers installed in Non-appliance Mode with Medium architecture type or higher, as detailed in the Hardware and Software Requirements.
      Virtual servers are used when the cell is required to process moderate transactions per second (TPS) load, or when the cell is part of a non-production environment where the production cell uses physical servers (to keep environments architecture similar).
  5. A cell environment must have only physical or only virtual cell members (cannot mix physical and virtual cell members in the same cell).
  6. All DPOD federated cell members must have the same resources, such as CPUs, RAM, disk type and storage capacity.
  7. Physical federated cell members with 4 CPU sockets and NVMe disks require a special disk and mount point configuration to ensure performance. See Configuring Cell Members with 4 CPU Sockets and NVMe Disks.
  8. Each cell component (manager / FCM) should have two network interfaces:
    1. External interface - for DPOD users to access the Web Console (on the cell manager) and for communication between DPOD and Monitored Gateways (on both the cell manager and the members).
    2. Internal interface - for internal DPOD components inter-communication (should be a 10Gb Ethernet interface).
  9. Network ports should be opened in the network firewall as detailed below:

| From | To | Ports (Defaults) | Protocol | Usage |
| --- | --- | --- | --- | --- |
| DPOD Cell Manager | Each Monitored Device | 5550 (TCP) | HTTP/S | Monitored device administration management interface. If the SOMA port is different than 5550, the port should be changed accordingly. |
| DPOD Cell Manager | DNS Server | 53 (TCP and UDP) | DNS | DNS services. Static IP address may be used. |
| DPOD Cell Manager | NTP Server | 123 (UDP) | NTP | Time synchronization |
| DPOD Cell Manager | Organizational mail server | 25 (TCP) | SMTP | Send reports by email |
| DPOD Cell Manager | LDAP | 389 / 636 (SSL), 3268 / 3269 (SSL) (TCP) | LDAP | Authentication & authorization. Can be over SSL. |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 443 (TCP) | HTTP/S | Communication (data + management) |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 22 (TCP) | TCP | SSH root access is needed for the cell installation and for admin operations from time to time. |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 9300-9305 (TCP) | ElasticSearch | ElasticSearch communication (data + management) |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 60000-60003 (TCP) | TCP | Syslog keep-alive data |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 60020-60023 (TCP) | TCP | HTTP/S WS-M keep-alive data |
| NTP Server | DPOD Cell Manager | 123 (UDP) | NTP | Time synchronization |
| Users IPs | DPOD Cell Manager | 443 (TCP) | HTTP/S | DPOD's Web Console |
| Admins IPs | DPOD Cell Manager | 22 (TCP) | TCP | SSH |
| Each DPOD Federated Cell Member | DPOD Cell Manager | 443 (TCP) | HTTP/S | Communication (data + management) |
| Each DPOD Federated Cell Member | DPOD Cell Manager | 9200, 9300-9400 (TCP) | ElasticSearch | ElasticSearch communication (data + management) |
| Each DPOD Federated Cell Member | DNS Server | 53 (TCP and UDP) | DNS | DNS services |
| Each DPOD Federated Cell Member | NTP Server | 123 (UDP) | NTP | Time synchronization |
| Each Monitored Device | Each DPOD Federated Cell Member | 60000-60003 (TCP) | TCP | SYSLOG data |
| Each Monitored Device | Each DPOD Federated Cell Member | 60020-60023 (TCP) | HTTP/S | WS-M payloads |
| NTP Server | Each DPOD Federated Cell Member | 123 (UDP) | NTP | Time synchronization |
| Admins IPs | Each DPOD Federated Cell Member | 22 (TCP) | TCP | SSH |

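The table refers to the network firewall. If a host firewall is also active on the cell components (for example, firewalld on RHEL-based non-appliance installations), the same ports must be allowed there as well. The following is a minimal sketch for a federated cell member, assuming firewalld and the default port numbers listed above:

# On each federated cell member: allow monitored devices to send Syslog and WS-M data
firewall-cmd --permanent --add-port=60000-60003/tcp
firewall-cmd --permanent --add-port=60020-60023/tcp
# On each federated cell member: allow the cell manager to reach HTTP/S, SSH and ElasticSearch
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=9300-9305/tcp
# Reload the firewall for the changes to take effect
firewall-cmd --reload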

Cell Manager Installation

Prerequisites

DPOD Installation

Install DPOD:

After DPOD installation is complete, execute the following operating system performance optimization commands and reboot the server:

# Set the heap size of the Store's raw transactions node (MonTier-es-raw-trans-Node-1) to 2GB
sed -i 's/^NODE_HEAP_SIZE=.*/NODE_HEAP_SIZE="2G"/g' /etc/init.d/MonTier-es-raw-trans-Node-1
# Apply DPOD's operating system performance tuning, then reboot for the changes to take effect
/app/scripts/tune-os-parameters.sh
reboot


Federated Cell Member Installation

The following section describes the installation process of a single Federated Cell Member (FCM). Please repeat the procedure for every FCM installation.

Prerequisites

DPOD Installation

Install DPOD:

After DPOD installation is complete, execute the following operating system performance optimization commands and reboot the server:

/app/scripts/tune-os-parameters.sh
reboot

Configuring Mount Points of Cell Member

List of Mount Points

The cell member server should contain disks according to the recommendations made in the sizing process with the IBM Support Team. Each disk should be mounted to a different mount point. The required mount points and their mapping to disks are described below.

Mapping Mount Points to Disks

Map the mount points to disks:

| Mount Points | Disks |
| --- | --- |
| /data2, /data22 and /data222 (if exists) | Disks connected to NUMA node 1 |
| /data3, /data33 and /data333 (if exists) | Disks connected to NUMA node 2 |
| /data4, /data44 and /data444 (if exists) | Disks connected to NUMA node 3 |
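The mapping above requires knowing which NUMA node each NVMe disk is attached to. The following commands are one way to check this (a sketch; the sysfs path and output may vary by kernel and hardware):

# Show the server's NUMA topology
numactl --hardware

# Show the NUMA node each NVMe controller is attached to (-1 means no affinity reported)
for dev in /sys/class/nvme/nvme*; do
    echo "${dev##*/}: NUMA node $(cat ${dev}/device/numa_node)"
done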

Creating Mount Points

Use LVM (Logical Volume Manager) to create the mount points. You may use the following commands as an example of how to configure a single mount point (/data2 on disk nvme0n1 in this case):

# Initialize the disk as an LVM physical volume
pvcreate -ff /dev/nvme0n1
# Create a volume group for the /data2 mount point
vgcreate vg_data2 /dev/nvme0n1
# Create a logical volume using all available space in the volume group
lvcreate -l 100%FREE -n lv_data vg_data2
# Create an XFS file system on the logical volume
mkfs.xfs -f /dev/vg_data2/lv_data
# Add the mount point to /etc/fstab, create its directory and mount it
echo "/dev/vg_data2/lv_data    /data2                   xfs     defaults        0 0" >> /etc/fstab
mkdir -p /data2
mount /data2

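The remaining mount points can be created the same way. The sketch below assumes the hypothetical disk-to-mount-point mapping shown in the lsblk output of the next section; adjust the device names to match your hardware and the NUMA mapping above:

# Hypothetical device-to-mount-point mapping - adjust to your environment
declare -A MOUNTS=( [nvme1n1]=data22 [nvme2n1]=data3 [nvme3n1]=data33 [nvme5n1]=data4 [nvme4n1]=data44 )

for DEV in "${!MOUNTS[@]}"; do
    MP="${MOUNTS[$DEV]}"
    pvcreate -ff "/dev/${DEV}"
    vgcreate "vg_${MP}" "/dev/${DEV}"
    lvcreate -l 100%FREE -n lv_data "vg_${MP}"
    mkfs.xfs -f "/dev/vg_${MP}/lv_data"
    echo "/dev/vg_${MP}/lv_data    /${MP}                   xfs     defaults        0 0" >> /etc/fstab
    mkdir -p "/${MP}"
    mount "/${MP}"
done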
Inspecting final configuration

Execute the following command and verify mount points (this example is for 6 disks per cell member and does not include other mount points that should exist):

lsblk

Expected output:
NAME                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1               259:2    0   2.9T  0 disk 
└─vg_data2-lv_data    253:0    0   2.9T  0 lvm  /data2
nvme1n1               259:5    0   2.9T  0 disk 
└─vg_data22-lv_data   253:11   0   2.9T  0 lvm  /data22
nvme2n1               259:1    0   2.9T  0 disk 
└─vg_data3-lv_data    253:9    0   2.9T  0 lvm  /data3
nvme3n1               259:0    0   2.9T  0 disk 
└─vg_data33-lv_data   253:10   0   2.9T  0 lvm  /data33
nvme4n1               259:3    0   2.9T  0 disk 
└─vg_data44-lv_data   253:8    0   2.9T  0 lvm  /data44
nvme5n1               259:4    0   2.9T  0 disk 
└─vg_data4-lv_data    253:7    0   2.9T  0 lvm  /data4

Cell Member Federation

To federate and configure a cell member, run the following script on the cell manager once per cell member.

Important: The script should be executed using the OS root user, and also requires remote root access over SSH from the cell manager to the cell member.

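Before running the federation script, it may be worth verifying that root SSH access from the cell manager to the cell member works. The address below (192.0.2.10) is a hypothetical example; replace it with the cell member's internal IP address:

# From the cell manager: confirm root SSH access to the cell member
ssh root@192.0.2.10 'hostname && date'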
Execute the script suitable for your environment:

The script writes two log files: one on the cell manager and one on the cell member. The log file names are mentioned in the script's output.
In case of a failure, the script tries to roll back the configuration changes it made, so the problem can be fixed before the script is rerun.

Updating Configuration for Physical Federated Cell Members with 4 CPU Sockets and NVMe Disks

Note: If the cell member server does not have 4 CPU sockets or does not have NVMe disks - skip this step.

To update the service files, execute the following commands:

# Bind the Store's raw transactions node 3 to NUMA node 2 (instead of the default node 1)
sed -i 's#/usr/bin/numactl --membind=1 --cpunodebind=1#/usr/bin/numactl --membind=2 --cpunodebind=2#g' /etc/init.d/MonTier-es-raw-trans-Node-3
# Bind the Store's raw transactions node 4 to NUMA node 3 (instead of the default node 1)
sed -i 's#/usr/bin/numactl --membind=1 --cpunodebind=1#/usr/bin/numactl --membind=3 --cpunodebind=3#g' /etc/init.d/MonTier-es-raw-trans-Node-4

To verify the NUMA configuration for all services, execute the following command:

grep numactl /etc/init.d/*

Updating Configuration for Federated Cell Members with at least 384GB RAM

Note: If the cell member server has less than 384GB RAM - skip this step.

To update the service files, execute the following commands:

sed -i 's/^NODE_HEAP_SIZE=.*/NODE_HEAP_SIZE="64G"/g' /etc/init.d/MonTier-es-raw-trans-Node-2
sed -i 's/^NODE_HEAP_SIZE=.*/NODE_HEAP_SIZE="64G"/g' /etc/init.d/MonTier-es-raw-trans-Node-3
sed -i 's/^NODE_HEAP_SIZE=.*/NODE_HEAP_SIZE="64G"/g' /etc/init.d/MonTier-es-raw-trans-Node-4
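To verify the change (an optional check, assuming NODE_HEAP_SIZE is defined in these service files as shown above), list the updated values with:

grep NODE_HEAP_SIZE /etc/init.d/MonTier-es-raw-trans-Node-*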

Cell Member Federation Verification

After a successful federation, you will be able to see the new federated cell member in the Manage → System → Nodes page. For example:

Also, the new agents will be shown in the agents list in the Manage → Internal Health → Agents page:

Configure the Monitored Gateways to Use the Federated Cell Member Agents

Configure the monitored gateways to use the federated cell members' agents. Please follow the instructions on Adding Monitored Devices.