...

  1. Before installing a cell environment, make sure to complete the sizing process with the IBM Support Team to get recommendations for the hardware and architecture suitable for your requirements.
  2. DPOD cell manager and federated cell members must be of the same version (minimum version is 1.0.8.6).
  3. DPOD cell manager is usually virtual and can be installed in either Appliance Mode or Non-Appliance Mode with the Medium Load architecture type, as detailed in the Hardware and Software Requirements.
  4. DPOD federated cell members (FCMs) can be one of the following:
    1. Physical servers installed in Non-Appliance Mode with High_20dv architecture type, as detailed in the Hardware and Software Requirements.
      Physical servers are used when the cell is required to process high levels of transactions per second (TPS) load.
    2. Virtual servers installed in Non-Appliance Mode with Medium architecture type or higher, as detailed in the Hardware and Software Requirements.
      Virtual servers are used when the cell is required to process moderate levels of transactions per second (TPS) load, or when the cell is part of a non-production environment whose production cell uses physical servers (to keep the environments' architecture similar).
  5. A cell environment must have only physical or only virtual cell members (cannot mix physical and virtual cell members in the same cell).
  6. All DPOD federated cell members must have the same resources, such as CPUs, RAM, disk type and storage capacity.
  7. Physical federated cell members with 4 CPU sockets and NVMe disks require special disks and mount points configuration to ensure performance. See Configuring Cell Members with 4 CPU Sockets and NVMe Disks.
  8. Each cell component (manager / FCM) should have two network interfaces:
    1. External interface - for DPOD users to access the Web Console (on the cell manager) and for communication between DPOD and Monitored Gateways (on both the cell manager and the members).
    2. Internal interface - for internal DPOD components inter-communication (should be a 10Gb Ethernet interface).
  9. Network ports should be opened in the network firewall as detailed below:

...

  • Make sure to meet the prerequisites listed at the top of this page.
  • Install the following software packages (RPMs) on the cell member: bc, numactl, pciutils, nvme-cli (see the installation example after this list)
  • Physical federated cell members with 4 CPU sockets and NVMe disks require special disks and mount points configuration to ensure performance. See Configuring Cell Members with 4 CPU Sockets and NVMe Disks.
  • Most Linux-based operating systems use a local firewall service (e.g. iptables / firewalld). Since the OS of a Non-Appliance Mode DPOD installation is provided by the user, it is the user's responsibility to allow the needed connectivity to and from the server.
    Configure the local firewall service to allow connectivity as described in the prerequisites section at the top of this page.

  • The following software packages (RPMs) are recommended for system maintenance and troubleshooting, but are not required: telnet client, net-tools, iftop, tcpdump, pciutils, nvme-cli
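
For example, on a RHEL-compatible OS managed with yum (package availability may depend on the repositories configured on the server), the required and recommended packages can be installed as follows:

    # Required packages for the cell member
    yum install -y bc numactl pciutils nvme-cli
    # Optional packages for maintenance and troubleshooting
    yum install -y telnet net-tools iftop tcpdump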

DPOD Installation

Install DPOD:

...

Mount Points                                      Disks
/data2, /data22 and /data222 (if exists)          Disks connected to NUMA node 1
/data3, /data33 and /data333 (if exists)          Disks connected to NUMA node 2
/data4, /data44 and /data444 (if exists)          Disks connected to NUMA node 3
  • For all other types of federated cell member servers - you may map any of the mount points to any disk.
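
To determine which NUMA node an NVMe disk is connected to (for the mapping above), the NUMA topology and the controller's sysfs entry can be inspected, for example:

    # List the NUMA nodes available on the server (requires the numactl package)
    numactl --hardware
    # Show the NUMA node an NVMe controller is attached to (nvme0 is an example - replace with the actual controller name)
    cat /sys/class/nvme/nvme0/device/numa_node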


It is highly recommended to use LVM (Logical Volume Manager) to create the mount points, as it allows flexibility for future storage needs. You may use the following commands as an example of how to configure a single mount point (/data2 in this case):
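
The commands below are an illustrative sketch only - the device name /dev/nvme1n1, the volume group / logical volume names and the xfs filesystem are placeholders; replace them with values appropriate for your environment:

    # Create a physical volume, a volume group and a logical volume for /data2 (example names)
    pvcreate /dev/nvme1n1
    vgcreate vg_data2 /dev/nvme1n1
    lvcreate -l 100%FREE -n lv_data2 vg_data2
    # Create the filesystem and mount it
    mkfs.xfs /dev/vg_data2/lv_data2
    mkdir -p /data2
    mount /dev/vg_data2/lv_data2 /data2
    # Persist the mount across reboots
    echo "/dev/vg_data2/lv_data2 /data2 xfs defaults 0 0" >> /etc/fstab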

...

To federate and configure the cell member, run the following script on the cell manager once per cell member.
For instance, to federate two cell members, run the script twice (on the cell manager): first with the IP address of the first cell member, and then with the IP address of the second cell member.

Important: The script should be executed using the OS root user, and it also requires remote root access over SSH from the cell manager to the cell member.
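
For example, the remote root access can be verified from the cell manager before running the script (same placeholder as in the commands below):

    # Verify remote root access over SSH from the cell manager to the cell member
    ssh root@<internal IP address of the cell member> hostname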

Execute the script suitable for your environment:

  • In case of physical federated cell members with 4 CPU sockets and NVMe disks:

    /app/scripts/configure_cell_manager.sh -a <internal IP address of the cell member> -g <external IP address of the cell member> -i physical


  • In case of physical federated cell member with 2 CPU sockets or SSD disks:

    /app/scripts/configure_cell_manager.sh -a <internal IP address of the cell member> -g <external IP address of the cell member> -i physical -n true


  • In case of virtual federated cell member:

    /app/scripts/configure_cell_manager.sh -a <internal IP address of the cell member> -g <external IP address of the cell member> -i virtual


The script writes two log files - one on the cell manager and one on the cell member. The log file names are mentioned in the script's output.
In case of a failure, the script tries to roll back the configuration changes it made, so the problem can be fixed before the script is run again.

Updating Configuration for Physical Federated Cell Members with 4 CPU Sockets and NVMe Disks

Note: If the cell member server does not have 4 CPU sockets or does not have NVMe disks - skip this step.
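
If you are not sure whether this applies to the server, the number of CPU sockets and the presence of NVMe disks can be checked, for example, with:

    # Show the number of CPU sockets
    lscpu | grep "Socket(s)"
    # List NVMe disks (requires the nvme-cli package)
    nvme list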

...