IBM DataPower Operations Dashboard v1.0.17.0


Prerequisites

  1. DPOD installed in All-In-One appliance or non-appliance mode.

  2. The system and app disks should support data redundancy (e.g., RAID on physical servers or a central storage system).

Backup

  1. Periodically back up or snapshot all the system and app disks as described in Table 1 below.

  2. In some cases the data disks cannot be backed up, e.g. due to the size of the Store data. In that case, back up everything except the /data disks (see the sketch after this list).

  3. To restore, restore everything from the backup, make sure the server keeps the same IP address, and execute the required commands…
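
A minimal file-level sketch of that partial backup, assuming a plain tar archive is acceptable in your environment (VM-level or storage-level snapshots are usually preferable, and the /mnt/backup target path is hypothetical):

    # Archive everything except /data and the pseudo file systems.
    # Run as root; adjust the excludes and the target path to your environment.
    tar -czf /mnt/backup/dpod-$(date +%F).tar.gz \
        --exclude=/data --exclude=/proc --exclude=/sys \
        --exclude=/dev --exclude=/run --exclude=/mnt/backup \
        /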

Restore

  1. Restore all the system and app disks as described in Table 1 below.

  2. If applicable, restore the /data disk (a hedged restore sketch follows this list), or follow the steps in the “Create New Data Disk” section.
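
A matching sketch for restoring the archive created in the backup example above (same assumptions; the archive name is hypothetical), run from a rescue environment or a freshly provisioned system that keeps the original IP address:

    # Unpack the archive over the root file system, preserving permissions,
    # then reboot so all services start from the restored state.
    tar -xzpf /mnt/backup/dpod-2024-01-01.tar.gz -C /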

Create New Data Disk

Non-Appliance Mode

Please consult your system administrator on how to add a new disk for the /data mount point, as described in Prepare Pre-Installed Operating System.
When done, follow the steps in the “Prepare /data for Application Use” section below.

Appliance Mode

  1. Add a dedicated data disk to the system.

  2. Find the new disk using the lsblk command. In the example below, the new 100G disk is sdc, which has no partitions or mount point yet:

    lsblk
    NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda                       8:0    0   40G  0 disk 
    ├─sda1                    8:1    0    2G  0 part /boot
    ├─sda2                    8:2    0 20.2G  0 part 
    │ ├─vg_root-lv_root     253:0    0    8G  0 lvm  /
    │ ├─vg_root-lv_tmp      253:2    0    4G  0 lvm  /tmp
    │ └─vg_root-lv_var      253:3    0    8G  0 lvm  /var
    └─sda3                    8:3    0    8G  0 part [SWAP]
    sdb                       8:16   0   40G  0 disk 
    ├─sdb1                    8:17   0 11.1G  0 part 
    │ └─vg_logs-lv_logs     253:5    0   11G  0 lvm  /logs
    ├─sdb2                    8:18   0 11.1G  0 part 
    │ └─vg_inst-lv_inst     253:7    0   11G  0 lvm  /installs
    ├─sdb3                    8:19   0  7.1G  0 part 
    │ └─vg_app-lv_app       253:6    0    7G  0 lvm  /app
    ├─sdb4                    8:20   0    1K  0 part 
    ├─sdb5                    8:21   0  4.1G  0 part 
    │ └─vg_apptmp-lv_apptmp 253:4    0    4G  0 lvm  /app/tmp
    └─sdb6                    8:22   0  600M  0 part 
      └─vg_shared-lv_shared 253:1    0  512M  0 lvm  /shared
    sdc                       8:32   0  100G  0 disk 
    sr0                      11:0    1 1024M  0 rom  

  3. Create the logical volume and mount it on /data:

    # Initialize the new disk as an LVM physical volume
    pvcreate -ff /dev/sdc
    # Create a volume group and a logical volume spanning the entire disk
    vgcreate vg_data /dev/sdc
    lvcreate -l 100%FREE -n lv_data vg_data
    # Format the volume with XFS, add an fstab entry and mount it
    mkfs.xfs -f /dev/vg_data/lv_data
    echo "/dev/vg_data/lv_data    /data                   xfs     defaults        0 0" >> /etc/fstab
    mount /data
  4. Make sure the data partition was created correctly using lsblk:

    lsblk
    NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    ...
    sdc                       8:32   0  100G  0 disk 
    └─vg_data-lv_data       253:8    0  100G  0 lvm  /data
    ...

Prepare /data for Application Use

  1. Recreate the directory structure by running the following commands:

    mkdir -m 755 -p /data/aggregatorAgents
    mkdir -m 755 -p /data/backups
    mkdir -m 755 -p /data/balancerAgents
    mkdir -m 755 -p /data/es-raw-trans
    mkdir -m 755 -p /data/keepalive
    mkdir -m 755 -p /data/Logical-Trans
    mkdir -m 755 -p /data/opensearch-dashboards
    mkdir -m 755 -p /data/reports
    mkdir -m 755 -p /data/reports/alerts-internal
    mkdir -m 755 -p /data/reports/reports-internal
    mkdir -m 755 -p /data/resources
    mkdir -m 755 -p /data/retention
    mkdir -m 755 -p /data/syslogAgents
    mkdir -m 755 -p /data/ui
    mkdir -m 755 -p /data/wsmAgents
    mkdir -m 755 -p /data/wsmAgents-internal-data
    # Match ownership of the raw transactions directory to the OpenSearch nodes directory
    chown /data/es-raw-trans --reference=/app/opensearch_nodes
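    # Optional verification sketch (GNU coreutils assumed): list the recreated
    # directories and confirm that /data/es-raw-trans ownership now matches
    # /app/opensearch_nodes.
    ls -ld /data/*
    stat -c '%U:%G %a %n' /app/opensearch_nodes /data/es-raw-trans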
  2. Start the Store services using app-utils.sh.

  3. Recreate the Store security indices: run the following command and make sure it ends with the message “Done with success”.

    # Upload the security configuration to the Store (host montier-es) using the
    # DPOD admin certificate; -icl ignores the cluster name, -nhnv skips host name
    # verification, and --accept-red-cluster proceeds even if the cluster is red.
    /app/opensearch_base/plugins/opensearch-security/tools/securityadmin.sh \
        -cd /app/opensearch_base/plugins/opensearch-security/securityconfig \
        -icl -nhnv -cacert /app/keys/store/dpod-es-ca-cert.pem \
        -cert /app/keys/store/dpod-es-admin-cert.pem -key /app/keys/store/dpod-es-admin-key.pem \
        -h montier-es --accept-red-cluster
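    # Optional hedged check: assumes the command's output was captured, e.g. by
    # appending "2>&1 | tee /tmp/securityadmin.out" above (the log path is
    # hypothetical); verify it contains the success message before continuing.
    grep -q "Done with success" /tmp/securityadmin.out \
        || echo "Store security index setup FAILED" >&2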
  4. Start all services with app-utils.sh.

Table 1 - File Systems / Mount Points

| File System / Mount Point | Disk       | Space in MiB                                                 | Device Type        | File System          |
|---------------------------|------------|--------------------------------------------------------------|--------------------|----------------------|
| biosboot                  | sys (sda)  | 2                                                            | Standard Partition | BIOS Boot            |
| swap                      | sys (sda)  | 8192                                                         | LVM                | swap                 |
| /boot                     | sys (sda)  | 2048                                                         | Standard Partition | XFS                  |
| /boot/efi                 | sys (sda)  | 200 (for UEFI installations with a GPT partition table)      | Standard Partition | EFI System Partition |
| /                         | sys (sda)  | 8192                                                         | LVM                | XFS                  |
| /var                      | sys (sda)  | 8192                                                         | LVM                | XFS                  |
| /tmp                      | sys (sda)  | 4096 (16384 recommended)                                     | LVM                | XFS                  |
| /shared                   | app (sdb)  | 512                                                          | LVM                | XFS                  |
| /app                      | app (sdb)  | 8192                                                         | LVM                | XFS                  |
| /app/tmp                  | app (sdb)  | 4096                                                         | LVM                | XFS                  |
| /installs                 | app (sdb)  | 11264                                                        | LVM                | XFS                  |
| /logs                     | app (sdb)  | 12288 (can be on another fast disk; local storage preferred) | LVM                | XFS                  |
| /data                     | data (sdc) | As described in Hardware and Software Requirements, or per the sizing spreadsheet if one was provided by the DPOD support team; minimum 100 GB | LVM | XFS |
