

Supported operating system

Install an operating system that is supported by DPOD, as described in Hardware and Software Requirements. After the server OS is installed, verify it using the following command:

Code Block
 cat /etc/redhat-release

Resources allocation

Allocate resources according to the chosen deployment profile, as listed in Hardware and Software Requirements. After the server OS is installed, verify the allocated resources using the following commands:

Code Block
free -h
lscpu

Network requirements

Ensure you have at least one network interface installed and configured with full access to network services, such as DNS and NTP.
Some configurations, such as the Cell environment, require two network interfaces.
See Firewall Requirements for more details.
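As a quick sanity check (standard RHEL tooling assumed), you can list the configured interfaces - remember that a Cell environment needs two:

```shell
# List network interfaces and their IP addresses - at least one must be configured
ip -brief addr show

# Show the link state of all interfaces
ip -brief link show
```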

Root access

The installation must be performed by a root user. You cannot use sudo instead.

  • Do not override the PATH variable with a fixed value during the login sequence, as this will override the value set by the DPOD installation in .bash_profile and cause various scripts to fail.

  • Do not use the script command during the login sequence to make a typescript of the terminal session for audit, as this will cause various scripts to hang.

  • Do not use the trap command to clear the terminal on session close, as this will cause various scripts to receive extra characters as their input and fail.

  • Do not print a disclaimer in .bashrc, as this will cause various scripts to receive the disclaimer as their input and fail.
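To confirm you are in a proper root session (a quick check, not a required step):

```shell
# Effective UID must be 0 for a real root login
id -u

# If you reached root via sudo, SUDO_USER will be set - it should be empty here
echo "SUDO_USER=${SUDO_USER:-}"
```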

Disks, mount points, file systems and logical volumes

DPOD requires at least 3 disks (LUNs / physical / virtual):

  • 1 disk for the operating system
  • 1 disk for the application/logs
  • At least 1 disk for the data

Some configurations, such as the Cell environment, require multiple disks for the data.
Please allocate the mount points / file systems on the different disks, as described in Table 1 below. 
It is strongly recommended to use the logical volume manager (LVM), particularly for the data disk(s). See Example: Creating File Systems using LVM.
Once configured, you may verify the configuration using the following command:

Code Block
lsblk

Tip: to create the mount points / file systems during RHEL installation:

  • Choose Installation Destination option.

  • Select all Local Standard drives and choose option "I will configure partitioning" under the "Other Storage Options" section.
  • Follow the table below and add all mount points with required definitions using the "+" button.

  • To create a volume group (sys, app, data), when applicable, open the "Volume Group" listbox and choose "create new volume group ...".

  • Tip: To use LVM in AWS EC2 instances with RHEL 8.x and EBS disks, first execute dnf install lvm2 to install the LVM package, and use gdisk to create a partition. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/create-lv-on-ebs-partition/.
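For illustration only, a non-interactive variant of that AWS tip might look like the following sketch. The device name /dev/nvme1n1 and the volume group name data are assumptions - check your actual device with lsblk first:

```shell
# Install the LVM tooling and gdisk (sgdisk is its non-interactive companion)
dnf install -y lvm2 gdisk

# Create one partition spanning the whole disk, typed as Linux LVM (8e00)
sgdisk --new=1:0:0 --typecode=1:8e00 /dev/nvme1n1

# Initialize the partition for LVM and create a volume group on it
pvcreate /dev/nvme1n1p1
vgcreate data /dev/nvme1n1p1
```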

    Store service dedicated OS user and group

    The Store service requires a dedicated OS user and group to run. Consider executing the following command:

    Code Block
    groupadd storeadms && useradd -g storeadms -md /home/storeadm -s /bin/bash storeadm
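Afterwards you can confirm the user and group were created:

```shell
# Both commands should succeed and print the new user/group details
id storeadm
getent group storeadms
```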

    OS locale

    The supported OS locale is en_US.UTF-8. Check the OS Locale Configuration and change it if necessary.
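For example (localectl assumes a systemd-based OS such as RHEL):

```shell
# Show the current system locale - it should report en_US.UTF-8
localectl status

# Change it if necessary (requires root)
localectl set-locale LANG=en_US.UTF-8
```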

    SELinux configuration

    Changes in SELinux configuration might be needed. Check if SELinux is enabled using the following command:

    Code Block
    sestatus

    If SELinux is enforced on the DPOD server, please review possible required configuration changes.

    Setup DNS

It is highly recommended to set up DNS - your network admin may need to assist you with this action.
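A simple way to check the current DNS configuration (resolver behavior may vary by RHEL version):

```shell
# Show the configured DNS servers
cat /etc/resolv.conf

# Confirm lookups succeed (redhat.com is a placeholder - use a hostname from your network)
getent hosts redhat.com
```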

    Using yum on RedHat

For RedHat only: your system might need to be registered and subscribed to the Red Hat Customer Portal to be able to install all prerequisites using yum.
    Registration and subscription may differ between organizations and RHEL versions, so consider the following commands just as an example:

    Code Block
    subscription-manager register
    subscription-manager attach --auto
    • For RHEL 7.x

      Code Block
      subscription-manager repos --enable=rhel-7-server-rh-common-rpms
      subscription-manager repos --enable=rhel-7-server-optional-rpms
    • For RHEL 8.x

      Code Block
      subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
      subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms

    Setup NTP

It is highly recommended to set up NTP - it must be the same NTP server configured in your IBM DataPower Gateways.

    • Consult your Linux and network admin about the proper way to configure this service.

    • For RHEL 7.x, ensure the NTP RPM is installed. Consider executing the following commands:

      Code Block
      yum install ntp
      ntpdate <ntp server hostname>
      systemctl enable ntpd.service
      systemctl start ntpd.service
    • For RHEL 8.x, ensure the Chrony RPM is installed. Consider executing the following commands:

      Code Block
      yum install chrony
      chronyd -q 'server {ntp_server_name} iburst'
      systemctl enable chronyd.service
      systemctl start chronyd.service
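Once the daemon is running, you may verify time synchronization. The chronyc commands below apply to RHEL 8.x; for RHEL 7.x use ntpq -p instead:

```shell
# List configured time sources and their reachability
chronyc sources

# Show the current synchronization status and offset
chronyc tracking
```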

    Setup hosts file

    Verify that the /etc/hosts file includes an entry with your server name mapped to your external server IP.
To display your server name, you may execute the command hostname.
    To display your server’s IP address, you may execute the command ip a.
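For example, an /etc/hosts entry and a quick verification (the IP and names below are placeholders):

```shell
# Example /etc/hosts line:
#   10.0.0.15   dpod01.example.com   dpod01

# The lookup should return the external IP, not 127.0.0.1
getent hosts "$(hostname)"
```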

    Required RPMs

    Verify the existence of the following RPMs from the official RedHat/CentOS yum repositories:

• httpd version 2.4.6-67 and above (together with the following dependencies: mailcap, apr, httpd-tools)

    • mod_ssl

    • mod_proxy_html

    • curl

    • wget

    • unzip

    • iptables

    • iptables-services

    • bc

    • fontconfig

    • squashfs-tools (make sure squashfs module is loaded - see more at https://access.redhat.com/solutions/5477831 - and that it is not disabled in /etc/modprobe.d)

    • numactl

    • pciutils

    • nvme-cli

    The installation is usually performed by executing yum. If the command fails to find the packages, you should manually download the RPM files and install them.

    Code Block
    yum install -y httpd
    yum install -y mod_ssl
    yum install -y mod_proxy_html
    yum install -y curl
    yum install -y wget
    yum install -y unzip
    yum install -y iptables
    yum install -y iptables-services
    yum install -y bc
    yum install -y fontconfig
    yum install -y squashfs-tools
    yum install -y numactl
    yum install -y pciutils
    yum install -y nvme-cli
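After installation, you can verify that everything is present with a loop like this (package names taken from the list above):

```shell
# Report any required RPM that is still missing
for pkg in httpd mod_ssl mod_proxy_html curl wget unzip iptables \
           iptables-services bc fontconfig squashfs-tools numactl pciutils nvme-cli; do
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done
```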

The following RPMs are recommended for system maintenance and troubleshooting, and are optional: telnet client, net-tools, iftop, tcpdump


    Ensure the httpd service is enabled and started by executing the command:

    Code Block
    systemctl enable httpd.service && systemctl start httpd.service && systemctl status httpd.service

    Cleanup

If you are using yum, it is recommended to clean its cache to make sure there is enough space in /var (the yum cache can take up a lot of space there). To clean the yum cache, execute the following command:

    Code Block
    yum clean all


    Table 1 - File Systems / Mount Points

    File System / Mount Point | Disk | Minimum Size | Device Type | File System
    biosboot | Operating System Disk (e.g.: sda) | 2MB | Standard Partition | BIOS BOOT
    swap | sys (sda) | 8GB | LVM | swap
    /boot | sys (sda) | 2GB | Standard Partition | XFS
    /boot/efi | sys (sda) | 200MB (for UEFI installations with a GPT partition) | Standard Partition | EFI System Partition
    / | sys (sda) | 8GB | LVM | XFS
    /var | sys (sda) | 8GB | LVM | XFS
    /tmp | sys (sda) | 15GB | LVM | XFS
    /shared | Application/logs Disk (e.g.: sdb) | 1GB | LVM | XFS
    /app | app (sdb) | 30GB | LVM | XFS
    /app/tmp | app (sdb) | 8GB | LVM | XFS
    /installs | app (sdb) | 30GB | LVM | XFS
    /logs | app (sdb) | 15GB | LVM | XFS
    /data | Data Disk(s) (e.g.: sdc, sdd, sde...) | As described in Hardware and Software Requirements, or according to the sizing spreadsheet in case one was provided by the DPOD support team. Minimum of 100GB. | LVM | XFS
    /data2, /data22, /data222, /data3, /data33, /data333, /data4, /data44, /data444 [Required only for cell members] | Dedicated 6 or 9 disks | Only for cell members, according to the sizing spreadsheet provided by the DPOD support team. See Setup a Cell Environment for information about these disks/mount points. | LVM | XFS