IBM DataPower Operations Dashboard v1.0.9.0



Overview

The cell environment (also referred to as a "federated environment") distributes DPOD's Store and DPOD's processing (using DPOD's agents) across multiple federated servers, in order to handle high transaction rates (thousands of transactions per second).

The cell has two main components:

  • Cell Manager - a DPOD server (virtual or physical) that manages all Federated Cell Members (FCMs) and provides central DPOD services such as the Web Console, reports, alerts, etc.
  • Federated Cell Member (FCM) - a DPOD server (usually physical, with local high speed storage) that includes Store data nodes and agents (Syslog and WS-M) for collecting, parsing and storing data. A cell may contain one or more cell members.

See the following chart:

The following procedure describes the process of establishing a DPOD cell environment.

Prerequisites


  1. The DPOD cell manager and the cell members (FCMs) must run the same version (the minimum version is v1.0.9.0).
  2. The DPOD cell manager can be either an "Appliance Mode" or a "Non Appliance Mode" installation with the "medium" architecture type, as detailed in the Hardware and Software Requirements. The manager server can be either virtual or physical.
  3. The DPOD cell member (FCM) should be a "Non Appliance Mode" installation with the "High_20dv with High Load" architecture type, as detailed in the Hardware and Software Requirements.
  4. Each cell component (manager / FCM) should have two network interfaces:
    1. External interface - for DPOD users to access the UI and for communication between DPOD and the Monitored Gateways.
    2. Internal interface - for communication between internal DPOD components (should be a 10Gb Ethernet interface; see the sketch after this list).
  5. Each installation requires different ports to be opened in the firewall - see Table 1.
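
To verify that the internal interface is indeed running at 10Gb, ethtool can be used; a minimal sketch, assuming the internal interface is named ens224 (replace with your actual interface name):

ethtool ens224 | grep Speed        # expect "Speed: 10000Mb/s" for a 10Gb Ethernet interface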



Table 1

| From                        | To                              | Ports (Defaults)                         | Protocol      | Usage                                                     |
|-----------------------------|---------------------------------|------------------------------------------|---------------|-----------------------------------------------------------|
| Cell Manager DPOD Appliance | Each Monitored Device           | 5550 (TCP)                               | HTTP/S        | Monitored Device administration management interface      |
| Cell Manager DPOD Appliance | DNS Server                      | 53 (TCP and UDP)                         | DNS           | DNS services. A static IP address may be used.            |
| Cell Manager DPOD Appliance | NTP Server                      | 123 (UDP)                                | NTP           | Time synchronization                                      |
| Cell Manager DPOD Appliance | Organizational mail server      | 25 (TCP)                                 | SMTP          | Send reports by email                                     |
| Cell Manager DPOD Appliance | LDAP                            | 389 / 636 (SSL), 3268 / 3269 (SSL) (TCP) | LDAP          | Authentication & authorization. Can be over SSL           |
| Cell Manager DPOD Appliance | Each Cell Member DPOD Appliance | 9300-9305 (TCP)                          | Elasticsearch | Elasticsearch communication (data + management)           |
| NTP Server                  | Cell Manager DPOD Appliance     | 123 (UDP)                                | NTP           | Time synchronization                                      |
| Each Monitored Device       | Cell Manager DPOD Appliance     | 60000-60003 (TCP)                        | TCP           | SYSLOG Data                                               |
| Each Monitored Device       | Cell Manager DPOD Appliance     | 60020-60023 (TCP)                        | HTTP/S        | WS-M Payloads                                             |
| Users' IPs                  | Cell Manager DPOD Appliance     | 443 (TCP)                                | HTTP/S        | Access to the IBM DataPower Operations Dashboard Console  |
| Admins' IPs                 | Cell Manager DPOD Appliance     | 22 (TCP)                                 | TCP           | SSH                                                       |
| Cell Member DPOD Appliance  | Cell Manager DPOD Appliance     | 9200, 9300-9400 (TCP)                    | Elasticsearch | Elasticsearch communication (data + management)           |
| Cell Member DPOD Appliance  | DNS Server                      | 53 (TCP and UDP)                         | DNS           | DNS services                                              |
| Cell Member DPOD Appliance  | NTP Server                      | 123 (UDP)                                | NTP           | Time synchronization                                      |
| NTP Server                  | Cell Member DPOD Appliance      | 123 (UDP)                                | NTP           | Time synchronization                                      |
| Each Monitored Device       | Cell Member DPOD Appliance      | 60000-60003 (TCP)                        | TCP           | SYSLOG Data                                               |
| Each Monitored Device       | Cell Member DPOD Appliance      | 60020-60023 (TCP)                        | HTTP/S        | WS-M Payloads                                             |
| Admins' IPs                 | Cell Member DPOD Appliance      | 22 (TCP)                                 | TCP           | SSH                                                       |


Manager Installation

The DPOD cell manager can be either an "Appliance Mode" or a "Non Appliance Mode" installation with the "medium" architecture type, as detailed in the Hardware and Software Requirements. The manager server can be either virtual or physical.


As described in the Prerequisites section, the cell topology requires two network interfaces. When installing the cell manager (the standard DPOD installation, before federating it into a cell), the user will be prompted to choose the IP address for the UI console; this should be the "External Interface".



Federated Cell Member Installation

The following section describes the installation process of a single Federated Cell Member (FCM). The user should repeat the procedure for every FCM installation.

Prerequisites

  • The DPOD cell member (FCM) should be a "Non Appliance Mode" installation with the "High_20dv with High Load" architecture type, as detailed in the Hardware and Software Requirements.
  • In addition to the "Non Appliance Mode" software requirements, the user should install the following software packages (RPMs) - see the sketch after this list:
    • iptables
    • iptables-services
    • numactl
    • We also recommend installing some utility packages that are useful for system maintenance and troubleshooting: telnet client, net-tools, iftop, tcpdump
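
On RHEL / CentOS based systems these packages can be installed with yum; a minimal sketch (exact package names and repositories, e.g. EPEL for iftop, may vary by distribution):

yum install -y iptables iptables-services numactl
# Recommended utilities for maintenance and troubleshooting
yum install -y telnet net-tools iftop tcpdump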

Installation

DPOD installation

Install DPOD in "Non Appliance Mode" as described in the following installation procedure.


As described in the Prerequisites section, the cell topology requires two network interfaces. When installing the cell member (the standard DPOD installation, before federating it into a cell), the user will be prompted to choose the IP address for the UI console.
Although the cell member does not have a UI service, the user should still choose the "External Interface".


After the DPOD installation is complete, the user should execute the following operating system performance optimization script.

/app/scripts/tune-os-parameters.sh


The user should reboot the server for the new performance optimizations to take effect.

Prepare Cell Member for Federation

Prepare mount points

The cell member is usually a "bare metal" server with NVMe disks in order to maximize server throughput.

Each of the Store's logical nodes (services) will be bound to specific physical processors, disks and memory using NUMA (Non-Uniform Memory Access).

The default cell member configuration assumes 6 NVMe disks, which will serve 3 Store data nodes (2 disks per node).

The following OS mount points should be configured by the user before federating the DPOD installation into a cell member.

We highly recommend using LVM (Logical Volume Manager) to allow flexible storage for future storage needs.


Note: the empty table cells should be completed by the user based on the specific hardware.


| Store Node | Mount Point Path | Disk Bay | Disk Serial | Disk Path | CPU No |
|------------|------------------|----------|-------------|-----------|--------|
| 2          | /data2           |          |             |           |        |
| 2          | /data22          |          |             |           |        |
| 3          | /data3           |          |             |           |        |
| 3          | /data33          |          |             |           |        |
| 4          | /data4           |          |             |           |        |
| 4          | /data44          |          |             |           |        |

How to Identify the Disk OS Path and Disk Serial
  1. To identify which of the server's NVMe disk bays is bound to which CPU, use the hardware manufacturer's documentation.
    Also, write down the disk's serial number by visually observing the disk.
  2. To identify the disk OS path (for example: /dev/nvme0n1) and the disk serial, the user should install the NVMe disk utility software provided by the hardware supplier. For example: for Intel based NVMe SSD disks, install the "Intel® SSD Data Center Tool" (isdct).
    Example output of the Intel SSD DC tool:

    isdct  show -intelssd
    
    - Intel SSD DC P4500 Series PHLE822101AN3PXXXX -
    
    Bootloader : 0133
    DevicePath : /dev/nvme0n1
    DeviceStatus : Healthy
    Firmware : QDV1LV45
    FirmwareUpdateAvailable : Please contact your Intel representative about firmware update for this drive.
    Index : 0
    ModelNumber : SSDPE2KE032T7L
    ProductFamily : Intel SSD DC P4500 Series
    SerialNumber : PHLE822101AN3PXXXX
    
    
  3. Use the disk bay number and the disk serial number (visually identified) and correlate them with the output of the disk tool to identify the disk OS path (see also the sketch below).
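
On most Linux distributions the serial number can also be read directly from the OS, which helps correlate the bay / serial with the device path; a minimal sketch (device names are examples):

# List NVMe block devices with their serial numbers, models and sizes
lsblk -d -o NAME,SERIAL,MODEL,SIZE /dev/nvme*n1
# The serial of a specific NVMe device is also exposed via sysfs
cat /sys/block/nvme0n1/device/serial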


Examples for Mount Points and Disk Configurations


| Store Node | Mount Point Path | Disk Bay | Disk Serial        | Disk Path    | CPU No |
|------------|------------------|----------|--------------------|--------------|--------|
| 2          | /data2           | 1        | PHLE822101AN3PXXXX | /dev/nvme0n1 | 1      |
| 2          | /data22          | 2        |                    | /dev/nvme1n1 | 1      |
| 3          | /data3           | 4        |                    | /dev/nvme2n1 | 2      |
| 3          | /data33          | 5        |                    | /dev/nvme3n1 | 2      |
| 4          | /data4           | 12       |                    | /dev/nvme4n1 | 3      |
| 4          | /data44          | 13       |                    | /dev/nvme5n1 | 3      |


Example for LVM Configuration
pvcreate -ff /dev/nvme0n1
vgcreate vg_data2 /dev/nvme0n1
lvcreate -l 100%FREE -n lv_data vg_data2
mkfs.xfs -f /dev/vg_data2/lv_data

pvcreate -ff /dev/nvme1n1
vgcreate vg_data22 /dev/nvme1n1
lvcreate -l 100%FREE -n lv_data vg_data22
mkfs.xfs /dev/vg_data22/lv_data
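
The same steps are repeated for the remaining disks (vg_data3, vg_data33, vg_data4, vg_data44). A minimal sketch of a loop that does this, assuming the device-to-volume-group mapping shown in the example table above (adjust to your hardware):

# Hypothetical mapping of NVMe devices to volume groups - adjust to match your disks
declare -A vg_map=( [/dev/nvme0n1]=vg_data2  [/dev/nvme1n1]=vg_data22
                    [/dev/nvme2n1]=vg_data3  [/dev/nvme3n1]=vg_data33
                    [/dev/nvme4n1]=vg_data4  [/dev/nvme5n1]=vg_data44 )
for dev in "${!vg_map[@]}"; do
    vg=${vg_map[$dev]}
    pvcreate -ff "$dev"                     # initialize the disk as an LVM physical volume
    vgcreate "$vg" "$dev"                   # create a dedicated volume group per disk
    lvcreate -l 100%FREE -n lv_data "$vg"   # one logical volume using all of the space
    mkfs.xfs -f "/dev/$vg/lv_data"          # format with XFS
done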


The /etc/fstab file:

/dev/vg_data2/lv_data    /data2                   xfs     defaults        0 0
/dev/vg_data22/lv_data   /data22                   xfs     defaults        0 0
/dev/vg_data3/lv_data    /data3                   xfs     defaults        0 0
/dev/vg_data33/lv_data   /data33                   xfs     defaults        0 0
/dev/vg_data4/lv_data    /data4                   xfs     defaults        0 0
/dev/vg_data44/lv_data   /data44                   xfs     defaults        0 0
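
After the entries are added to /etc/fstab, create the mount point directories and mount the file systems; a minimal sketch:

mkdir -p /data2 /data22 /data3 /data33 /data4 /data44
mount -a          # mount everything defined in /etc/fstab
df -h /data2      # verify the new file systems are mounted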
Example of the Final Configuration for 3 Store Nodes

Not including other mount points needed, as described in the DPOD Hardware and Software Requirements.



# lsblk

NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1             259:0    0   2.9T  0 disk
└─vg_data2-lv_data  253:6    0   2.9T  0 lvm  /data2
nvme1n1             259:5    0   2.9T  0 disk
└─vg_data22-lv_data 253:3    0   2.9T  0 lvm  /data22
nvme2n1             259:1    0   2.9T  0 disk
└─vg_data3-lv_data  253:2    0   2.9T  0 lvm  /data3
nvme3n1             259:2    0   2.9T  0 disk
└─vg_data33-lv_data 253:5    0   2.9T  0 lvm  /data33
nvme4n1             259:4    0   2.9T  0 disk
└─vg_data44-lv_data 253:7    0   2.9T  0 lvm  /data44
nvme5n1             259:3    0   2.9T  0 disk
└─vg_data4-lv_data  253:8    0   2.9T  0 lvm  /data4

Prepare local OS based firewall

Most Linux based operating systems use a local firewall service (iptables / firewalld).

The OS for a "Non Appliance Mode" DPOD installation is provided by the user, and it is the user's responsibility to allow the needed connectivity to and from the server.

The user should make sure that the connectivity detailed in Table 1 is allowed by the OS local firewall service (see the sketch below).

When using a DPOD "Appliance Mode" installation for the cell manager, the local OS firewall service is handled by the cell member federation script.
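
For example, with iptables the cell member ports from Table 1 could be opened as follows; this is a minimal sketch only (rule order, interfaces and source restrictions should match your security policy):

iptables -A INPUT -p tcp -m multiport --dports 9200,9300:9400 -j ACCEPT   # Elasticsearch (cell manager <-> cell member)
iptables -A INPUT -p tcp --dport 60000:60003 -j ACCEPT                    # SYSLOG data from monitored devices
iptables -A INPUT -p tcp --dport 60020:60023 -j ACCEPT                    # WS-M payloads from monitored devices
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                             # SSH for administrators
service iptables save                                                     # persist the rules (iptables-services)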


Cell Member Federation

In order to federate and configure the cell member, run the following script on the cell manager once per cell member - e.g. if you want to add two cell members, run the script twice (on the cell manager), the first time with the IP addresses of the first cell member and the second time with the IP addresses of the second cell member.

Important: the command should be executed using the OS "root" user.


/app/scripts/configure_federated_cluster_manager.sh -a <internal IP address of the cell member> -g <external IP address of the cell member>
For example: /app/scripts/configure_federated_cluster_manager.sh -a 172.18.100.34 -g 172.17.100.33


Example of a successful execution - note that the script writes two log files, one on the cell manager and one on the cell member; the log file names are mentioned in the script's output.


Example of a failed execution - you will need to check the log file for further information.

In case of a failure, the script will try to roll back the configuration changes it made, so you can fix the problem and run it again.

Cell Member Federation Post Steps

NUMA configuration

The DPOD cell member uses NUMA (Non-Uniform Memory Access) technology.

The default cell member configuration binds DPOD's agents to CPU 0 and the Store's nodes to CPU 1.
If the server has 4 CPUs, the user should edit the service files of nodes 2-3 and change the bound CPU to 2 and 3 respectively.

Identify NUMA configuration

To identify the number of CPUs installed on the server, use the NUMA utility:

numactl -s

Example output for a 4-CPU server:

policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 
cpubind: 0 1 2 3
nodebind: 0 1 2 3
membind: 0 1 2 3
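
The NUMA node count can also be checked with the following commands; a minimal sketch (the outputs shown in the comments are illustrative):

numactl --hardware | grep available    # e.g. "available: 4 nodes (0-3)" on a 4-CPU server
lscpu | grep -i "numa node(s)"         # e.g. "NUMA node(s): 4"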
Alter Store's Nodes 3-4

OPTIONAL - if the server has 4 CPUs, alter the Store node service files to bind each node to a different CPU.

The service files are located in the /etc/init.d/ directory, with the names MonTier-es-raw-trans-Node-2 and MonTier-es-raw-trans-Node-3.

For node MonTier-es-raw-trans-Node-2
OLD VALUE : numa="/usr/bin/numactl --membind=1 --cpunodebind=1"
NEW VALUE : numa="/usr/bin/numactl --membind=2 --cpunodebind=2"

For node MonTier-es-raw-trans-Node-3
OLD VALUE : numa="/usr/bin/numactl --membind=1 --cpunodebind=1"
NEW VALUE : numa="/usr/bin/numactl --membind=3 --cpunodebind=3"
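
The change can also be applied with sed; a minimal sketch, assuming the service files contain exactly the old value shown above (back up the files first):

cp /etc/init.d/MonTier-es-raw-trans-Node-2 /etc/init.d/MonTier-es-raw-trans-Node-2.bak
cp /etc/init.d/MonTier-es-raw-trans-Node-3 /etc/init.d/MonTier-es-raw-trans-Node-3.bak
sed -i 's/--membind=1 --cpunodebind=1/--membind=2 --cpunodebind=2/' /etc/init.d/MonTier-es-raw-trans-Node-2
sed -i 's/--membind=1 --cpunodebind=1/--membind=3 --cpunodebind=3/' /etc/init.d/MonTier-es-raw-trans-Node-3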


Cell Member Federation Verification


After a successful execution, you will be able to see the new cell members in the Manage → System → Nodes page.
For example, if we added two cell members:

Also, the new agents will be shown in the Manage → Internal Health → Agents page.
For example, if we have one cell manager with two agents and one cell member with four agents, the page will show six agents:

Configure the Monitored Devices to Use the Remote Collector's Agents

It is possible to configure an entire monitored device, or just a specific domain, to use a remote collector's agents.

To configure a monitored device or a specific domain, please follow the instructions in Adding Monitored Devices.



