...

  1. The DPOD cell manager and federated cell members must be of the same version (minimum version is v1.0.9.0).
  2. The DPOD cell manager can be installed in either Appliance Mode or Non-Appliance Mode with the Medium Load architecture type, as detailed in the Hardware and Software Requirements. The manager server can be either virtual or physical.
  3. A DPOD federated cell member (FCM) should be installed in Non-Appliance Mode with the High Load architecture type, as detailed in the Hardware and Software Requirements.
  4. Each cell component (manager / FCM) should have two network interfaces:
    1. External interface - for DPOD users to access the Web Console and for communication between DPOD and Monitored Gateways.
    2. Internal interface - for internal DPOD components inter-communication (should be a 10Gb Ethernet interface).
  5. Network ports should be opened in the network firewall as detailed below:

Network Ports

| From | To | Ports (Defaults) | Protocol | Usage |
|------|----|------------------|----------|-------|
| DPOD Cell Manager | Each Monitored Device | 5550 (TCP) | HTTP/S | Monitored device administration management interface |
| DPOD Cell Manager | DNS Server | 53 (TCP and UDP) | DNS | DNS services. A static IP address may be used instead. |
| DPOD Cell Manager | NTP Server | 123 (UDP) | NTP | Time synchronization |
| DPOD Cell Manager | Organizational mail server | 25 (TCP) | SMTP | Send reports by email |
| DPOD Cell Manager | LDAP Server | TCP 389 / 636 (SSL), TCP 3268 / 3269 (SSL) | LDAP | Authentication & authorization. Can be over SSL. |
| DPOD Cell Manager | Each DPOD Federated Cell Member | 9300-9305 (TCP) | ElasticSearch | ElasticSearch communication (data + management) |
| NTP Server | DPOD Cell Manager | 123 (UDP) | NTP | Time synchronization |
| Each Monitored Device | DPOD Cell Manager | 60000-60003 (TCP) | TCP | SYSLOG data |
| Each Monitored Device | DPOD Cell Manager | 60020-60023 (TCP) | HTTP/S | WS-M payloads |
| Users IPs | DPOD Cell Manager | 443 (TCP) | HTTP/S | IBM DataPower Operations Dashboard Web Console |
| Admins IPs | DPOD Cell Manager | 22 (TCP) | TCP | SSH |
| Each DPOD Federated Cell Member | DPOD Cell Manager | 9200, 9300-9400 (TCP) | ElasticSearch | ElasticSearch communication (data + management) |
| Each DPOD Federated Cell Member | DNS Server | 53 (TCP and UDP) | DNS | DNS services |
| Each DPOD Federated Cell Member | NTP Server | 123 (UDP) | NTP | Time synchronization |
| NTP Server | Each DPOD Federated Cell Member | 123 (UDP) | NTP | Time synchronization |
| Each Monitored Device | Each DPOD Federated Cell Member | 60000-60003 (TCP) | TCP | SYSLOG data |
| Each Monitored Device | Each DPOD Federated Cell Member | 60020-60023 (TCP) | HTTP/S | WS-M payloads |
| Admins IPs | Each DPOD Federated Cell Member | 22 (TCP) | TCP | SSH |

...

  • A DPOD federated cell member (FCM) should be installed in Non-Appliance Mode with the High Load architecture type, as detailed in the Hardware and Software Requirements.
  • The following software packages (RPMs) should be installed: iptables, iptables-services, numactl
  • The following software packages (RPMs) are recommended for system maintenance and troubleshooting, but are not required: telnet client, net-tools, iftop, tcpdump
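
Assuming a RHEL or CentOS system with yum available (package names may differ on other distributions), the packages listed above could be installed as follows:

```shell
# Required packages for the cell member (per the prerequisites above)
yum install -y iptables iptables-services numactl

# Optional packages, recommended for maintenance and troubleshooting
yum install -y telnet net-tools iftop tcpdump
```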

Installation

DPOD

...

Installation

...

Note

The user should reboot the server for the new performance optimizations to take effect.

...

Preparing Cell Member for Federation

...

Preparing Mount Points

The cell member is usually a "bare metal" server with NVMe disks for maximizing server throughput.

Each of the Store's logical nodes (services) will be bound to a specific physical processor, disks and memory using NUMA (Non-Uniform Memory Access) technology.

The default cell member configuration assumes 6 NVMe disks, which will serve 3 Store data logical nodes (2 disks per node).

The following OS mount points should be configured by the user before federating the cell member to the cell environment.

Note

We highly recommend using LVM (Logical Volume Manager) to allow flexible storage for future storage needs.


Empty cells in the following table should be completed by the user, based on their specific hardware:

| Store Node # | Mount Point Path | Disk Bay | Disk Serial | Disk OS Path | CPU No. |
|--------------|------------------|----------|-------------|--------------|---------|
| 2 | /data2  | | | | |
| 2 | /data22 | | | | |
| 3 | /data3  | | | | |
| 3 | /data33 | | | | |
| 4 | /data4  | | | | |
| 4 | /data44 | | | | |

How to Identify Disk OS Path and Disk Serial
  1. To identify which of the server's NVMe disk bays is bound to which CPU, use the hardware manufacturer's documentation.
    Also, write down the disk's serial number by visually observing the disk.
  2. To identify the disk OS path (e.g. /dev/nvme0n1) and the disk serial, install the NVMe disk utility software provided by the hardware supplier. For example, for Intel-based NVMe SSD disks, install the "Intel® SSD Data Center Tool" (isdct).
    Example output of the Intel SSD DC tool:

    isdct  show -intelssd
    
    - Intel SSD DC P4500 Series PHLE822101AN3PXXXX -
    
    Bootloader : 0133
    DevicePath : /dev/nvme0n1
    DeviceStatus : Healthy
    Firmware : QDV1LV45
    FirmwareUpdateAvailable : Please contact your Intel representative about firmware update for this drive.
    Index : 0
    ModelNumber : SSDPE2KE032T7L
    ProductFamily : Intel SSD DC P4500 Series
    SerialNumber : PHLE822101AN3PXXXX
    
    


  3. Use the disk bay number and the disk serial number (visually identified) and correlate them with the output of the disk tool to identify the disk OS path.
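
To make step 3 easier, the relevant fields can be filtered out of the tool's saved output. A minimal sketch; the file name isdct.out and the inline sample data are illustrative assumptions, not part of the original procedure:

```shell
# Illustrative sample of isdct output, created inline for the sketch
# (normally produced by: isdct show -intelssd > isdct.out)
cat > isdct.out <<'EOF'
Bootloader : 0133
DevicePath : /dev/nvme0n1
DeviceStatus : Healthy
SerialNumber : PHLE822101AN3PXXXX
EOF

# Keep only the OS path and serial number lines for correlation
grep -E 'DevicePath|SerialNumber' isdct.out
```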

...

Example for Mount Points and Disk Configurations

| Store Node # | Mount Point Path | Disk Bay | Disk Serial | Disk OS Path | CPU No. |
|--------------|------------------|----------|-------------|--------------|---------|
| 2 | /data2  | 1  | PHLE822101AN3PXXXX | /dev/nvme0n1 | 1 |
| 2 | /data22 | 2  | | /dev/nvme1n1 | 1 |
| 3 | /data3  | 4  | | /dev/nvme2n1 | 2 |
| 3 | /data33 | 5  | | /dev/nvme3n1 | 2 |
| 4 | /data4  | 12 | | /dev/nvme4n1 | 3 |
| 4 | /data44 | 13 | | /dev/nvme5n1 | 3 |
Example for LVM Configuration
pvcreate -ff /dev/nvme0n1
vgcreate vg_data2 /dev/nvme0n1
lvcreate -l 100%FREE -n lv_data vg_data2
mkfs.xfs -f /dev/vg_data2/lv_data

pvcreate -ff /dev/nvme1n1
vgcreate vg_data22 /dev/nvme1n1
lvcreate -l 100%FREE -n lv_data vg_data22
mkfs.xfs /dev/vg_data22/lv_data


The /etc/fstab file:

/dev/vg_data2/lv_data    /data2                   xfs     defaults        0 0
/dev/vg_data22/lv_data   /data22                   xfs     defaults        0 0
/dev/vg_data3/lv_data    /data3                   xfs     defaults        0 0
/dev/vg_data33/lv_data   /data33                   xfs     defaults        0 0
/dev/vg_data4/lv_data    /data4                   xfs     defaults        0 0
/dev/vg_data44/lv_data   /data44                   xfs     defaults        0 0
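
Before `mount -a` can pick up the /etc/fstab entries above, the mount point directories must exist. A minimal sketch, assuming the default six-disk layout:

```shell
# Create the mount point directories for the 3 Store data nodes (2 disks each)
mkdir -p /data2 /data22 /data3 /data33 /data4 /data44

# Mount all filesystems declared in /etc/fstab and verify the result
mount -a
df -h | grep '/data'
```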

...

Example of the Final Configuration for 3 Store Nodes
Note

This example does not include other required mount points, as described in the Hardware and Software Requirements.


# lsblk

NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1             259:0    0   2.9T  0 disk
└─vg_data2-lv_data  253:6    0   2.9T  0 lvm  /data2
nvme1n1             259:5    0   2.9T  0 disk
└─vg_data22-lv_data 253:3    0   2.9T  0 lvm  /data22
nvme2n1             259:1    0   2.9T  0 disk
└─vg_data3-lv_data  253:2    0   2.9T  0 lvm  /data3
nvme3n1             259:2    0   2.9T  0 disk
└─vg_data33-lv_data 253:5    0   2.9T  0 lvm  /data33
nvme4n1             259:4    0   2.9T  0 disk
└─vg_data44-lv_data 253:7    0   2.9T  0 lvm  /data44
nvme5n1             259:3    0   2.9T  0 disk
└─vg_data4-lv_data  253:8    0   2.9T  0 lvm  /data4

...

Preparing Local OS Based Firewall

Most Linux-based operating systems use a local firewall service (e.g. iptables / firewalld).

Since the OS of a Non-Appliance Mode DPOD installation is provided by the user, it is the user's responsibility to allow the needed connectivity to and from the server.

The user should make sure the connectivity detailed in the Network Ports table is allowed by the OS local firewall service.

Note

When using the DPOD Appliance Mode installation for the cell manager, the local OS-based firewall service is handled by the cell member federation script.
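
For Non-Appliance Mode cell members, where the user manages the firewall, the inbound rules implied by the Network Ports table could look like the following sketch using iptables (one of the required packages listed above). Exact rules, interfaces, and source-address restrictions depend on your environment:

```shell
# Allow inbound traffic to a federated cell member
iptables -A INPUT -p tcp --dport 22 -j ACCEPT            # SSH for admins
iptables -A INPUT -p tcp --dport 60000:60003 -j ACCEPT   # SYSLOG data from monitored devices
iptables -A INPUT -p tcp --dport 60020:60023 -j ACCEPT   # WS-M payloads from monitored devices
iptables -A INPUT -p tcp --dport 9200 -j ACCEPT          # ElasticSearch from the cell manager
iptables -A INPUT -p tcp --dport 9300:9400 -j ACCEPT     # ElasticSearch from the cell manager

# Persist the rules across reboots (provided by the iptables-services package)
service iptables save
```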

...