IBM DataPower Operations Dashboard v1.0.20.x
Cell Environment Installation Steps
Cell Manager Installation
Make sure to meet the prerequisites listed at the top of this page.
For Non-appliance Mode, follow the procedure: Prepare Pre-Installed Operating System.
For Non-appliance Mode, follow the procedure: Non-Appliance Installation.
For Appliance Mode, follow the procedure: Appliance Installation.
During installation, when prompted to choose the data disk type (SSD / non SSD), choose the cell members' disk type (which should be SSD) rather than the cell manager's disk type. A quick way to check a disk's type is shown at the end of this section.
During installation, when prompted to choose the IP address for the Web Console, choose the IP address of the external network interface.
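If you need to confirm whether the cell members' data disks are SSDs, one option is to inspect the kernel's rotational flag on the cell member server, for example:
lsblk -d -o NAME,ROTA,TYPE
A ROTA value of 0 indicates a non-rotational (SSD or NVMe) device, while 1 indicates a rotating disk.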
Federated Cell Member Installation
The following section describes the installation process of a single Federated Cell Member (FCM). Please repeat the procedure for every FCM installation.
Make sure to meet the prerequisites listed at the top of this page.
Follow the procedure: Prepare Pre-Installed Operating System.
Physical servers should use RHEL as the operating system (and not CentOS).
The cell member server should contain disks according to the recommendations made during the sizing process with the IBM Support Team, which include disks for the OS, the installation, and data (one disk for /data and 6 to 9 additional disks for /data2/3/4...).
Physical federated cell members with 4 CPU sockets and NVMe disks require special disks and mount points configuration to ensure performance. See Configuring Cell Members with 4 CPU Sockets and NVMe Disks.
Use Non-appliance Mode and follow the procedure: Non-Appliance Installation
During installation, the four-letter Installation Environment Name should be identical to the one that was chosen during the Cell Manager installation.
During installation, when prompted to choose the IP address for the Web Console, choose the IP address of the external network interface.
Make sure the httpd service is running and can be restarted successfully:
systemctl restart httpd
If an error is displayed during the service restart, please see if the following information helps in resolving it: Why httpd service fails to start by default and doesn't create
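To confirm the service came back up after the restart, a quick check with standard systemctl commands can be used, for example:
systemctl is-active httpd
systemctl status httpd --no-pager
The first command should print "active".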
Configuring Mount Points of Cell Member
List of Mount Points
The cell member server should contain disks according to the recommendations made during the sizing process with the IBM Support Team, which include disks for the OS, the installation, and data (one disk for /data and 6 to 9 additional disks for /data2/3/4...). The data disks should be mounted on separate mount points. The required mount points are:
In case the server has 6 disks: /data2, /data22, /data3, /data33, /data4, /data44
In case the server has 9 disks: /data2, /data22, /data222, /data3, /data33, /data333, /data4, /data44, /data444
Mapping Mount Points to Disks
Map the mount points to disks:
In case of physical federated cell members with 4 CPU sockets and NVMe disks - use the information gathered at Configuring Cell Members with 4 CPU Sockets and NVMe Disks to map each mount point to the proper disk (one way to check the NUMA node of each NVMe disk is sketched after the table below):
| Mount Points | Disks |
|---|---|
| /data2, /data22 and /data222 (if present) | Disks connected to NUMA node 1 |
| /data3, /data33 and /data333 (if present) | Disks connected to NUMA node 2 |
| /data4, /data44 and /data444 (if present) | Disks connected to NUMA node 3 |
For all other types of federated cell member servers, you may map the mount points to any disk.
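As a sketch of one way to identify which NUMA node each NVMe disk is attached to (the exact sysfs paths can vary by kernel version, so treat this as illustrative and cross-check it against the information gathered at Configuring Cell Members with 4 CPU Sockets and NVMe Disks):
for dev in /sys/block/nvme*n1; do
    # the PCI device behind the NVMe controller usually exposes its NUMA node in sysfs
    node=$(cat "$dev"/device/device/numa_node 2>/dev/null || cat "$dev"/device/numa_node 2>/dev/null)
    echo "$(basename "$dev"): NUMA node ${node:-unknown}"
done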
Creating Mount Points
Use LVM (Logical Volume Manager) to create the mount points. You may use the following commands as an example of how to configure a single mount point (/data2 on disk nvme0n1 in this case):
pvcreate -ff /dev/nvme0n1                    # initialize the disk as an LVM physical volume
vgcreate vg_data2 /dev/nvme0n1               # create a volume group on that disk
lvcreate -l 100%FREE -n lv_data vg_data2     # allocate all free space to a single logical volume
mkfs.xfs -f /dev/vg_data2/lv_data            # format the logical volume with XFS
echo "/dev/vg_data2/lv_data /data2 xfs defaults 0 0" >> /etc/fstab   # persist the mount across reboots
mkdir -p /data2                              # create the mount point directory
mount /data2                                 # mount it using the new fstab entry
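The same steps can be scripted for all data disks at once. The following is only a sketch: the disk-to-mount-point mapping in the MOUNTS array is a placeholder that must be replaced with the mapping chosen above, and the vg_<name> naming simply extends the single-disk example:
# placeholder mapping - replace with the actual disk-to-mount-point assignment
declare -A MOUNTS=( [/dev/nvme0n1]=/data2 [/dev/nvme1n1]=/data22 [/dev/nvme2n1]=/data3 )
for disk in "${!MOUNTS[@]}"; do
    mp=${MOUNTS[$disk]}
    vg="vg_${mp#/}"                          # e.g. vg_data2 for /data2
    pvcreate -ff "$disk"
    vgcreate "$vg" "$disk"
    lvcreate -l 100%FREE -n lv_data "$vg"
    mkfs.xfs -f "/dev/$vg/lv_data"
    echo "/dev/$vg/lv_data $mp xfs defaults 0 0" >> /etc/fstab
    mkdir -p "$mp"
    mount "$mp"
done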
Inspecting the Final Configuration
Execute the following command and verify mount points (this example is for 6 disks per cell member and does not include other mount points that should exist):
lsblk
Expected output: each data disk should contain a single LVM logical volume mounted at its corresponding mount point (/data2, /data22, and so on).
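In addition to reviewing the lsblk output, the presence of the expected mount points can be verified with a short loop such as the following (shown here for the 6-disk layout; extend the list for 9 disks):
for mp in /data /data2 /data22 /data3 /data33 /data4 /data44; do
    mountpoint -q "$mp" && echo "$mp: mounted" || echo "$mp: NOT mounted"
done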
Cell Member Federation
To federate and configure the cell member, run the following script on the cell manager, once per cell member.
Important: The script should be executed using the OS root user, and also requires remote root access over SSH from the cell manager to the cell member.
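Before running the script, it is worth confirming that root SSH access from the cell manager to the cell member works, for example (replace the placeholder host name):
ssh root@<cell-member-host> 'hostname'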
Execute the script suitable for your environment:
In case of a physical federated cell member with 4 CPU sockets and NVMe disks:
In case of a physical federated cell member with 2 CPU sockets or SSD disks:
In case of a virtual federated cell member:
The script writes two log files, one on the cell manager and one on the cell member. The log file names are mentioned in the script's output.
In case of a failure, the script will try to roll back the configuration changes it made, so the problem can be fixed before rerunning it.
If the rollback fails and the cell member services do not start successfully, it might be necessary to uninstall DPOD from the cell member, reinstall it, and federate it again.
If the SSH connection to the cell manager is lost during the federation, the process still continues. Reconnect to the cell manager and check the log files for the process status and outcome.
Reboot the Federated Cell Member
Execute the following command to reboot the cell member:
reboot
Cell Member Federation Verification
After a successful federation, the new federated cell member will be visible in the Manage → System → Nodes page.
Also, the new agents will be shown in the agents list in the Manage → Internal Health → Agents page.
Configure the Monitored Gateways to Use the Federated Cell Member Agents
Configure the monitored gateways to use the federated cell member agents. Please follow the instructions in Adding Monitored Gateways.