IBM DataPower Operations Dashboard v1.0.17.0

Data Migration Procedure

This procedure is intended for users who want to migrate from one DPOD installation to another in the following scenarios:

  • Migration from a DPOD Appliance mode installation of version v1.0.0 (CentOS 6.7) to CentOS 7.2, which was introduced in version v1.0.2+.
  • Migration from DPOD on a virtual server to a physical server (e.g. when increased load requires a physical server based installation).
  • Migration from DPOD Appliance to Non-Appliance mode (RHEL) to better comply with the organization's technical / security requirements and standards.

A new procedure and tools were introduced to support customers with migrating existing DPOD Store data to a new DPOD installation in each of the scenarios above.

The procedure includes the following main steps:

  • Gather the required artifacts from the current (migrate-from) DPOD installation.
  • Install a new, clean DPOD installation.
  • Import the artifacts into the new DPOD installation.

Prerequisites

  • Both systems (current and new) must run the same DPOD version, 1.0.6.0 or above.


We highly recommend contacting DPOD support while planning the migration in order to verify the technical procedure.


Collect Required Artifacts from Source System

Application Files and Internal DB

Invoke the backup command

app_backup.sh
INFO : backup finished successfully. for more information see log file /installs/system-backup/full-backup-2017-09-11_17-17-59/full-backup-2017-09-11_17-17-59.log

The output backup directory is the location of the backup log, as printed in the backup status message (in the example above this is /installs/system-backup/full-backup-2017-09-11_17-17-59).

Copy the backup directory to a temporary location outside the current DPOD system.
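
For example, the backup directory can be copied off the system with scp (the target host and path below are placeholders; adjust them to your environment):

# Copy the whole backup directory to a temporary location on another host (placeholder host / path)
scp -r /installs/system-backup/full-backup-2017-09-11_17-17-59 user@backup-host:/tmp/dpod-migration/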

Service Files

DPOD's service files are located in the /etc/init.d directory.

The service files will not be migrated to the new system because they are not compatible with the new OS version.

If the user altered one of the service files manually, it is their responsibility to migrate these changes to the new service files.
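
For example, before decommissioning the current system you may archive the old service files, so that any manual changes can later be compared against the files installed on the new system (the archive path is illustrative):

# Archive the old service files for later reference (illustrative archive path)
tar czf /tmp/dpod-initd-backup.tar.gz /etc/init.d
# After extracting the archive on the new system, compare a specific service file, e.g.:
# diff /tmp/extracted/etc/init.d/<service-file> /etc/init.d/<service-file>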

User Custom Artifacts


If the user is using any custom artifacts that are NOT located in one of the system's built-in locations, it is the customer's responsibility to migrate these artifacts from the current system to the new one.

Examples of custom artifacts include custom key stores used for DPOD SSL client authentication.
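
For instance, if custom key stores were placed in a directory outside the built-in locations (the path below is purely illustrative), they can be copied to the new system with scp:

# Copy a custom key store directory (illustrative path) from the current system to the new one
scp -r /installs/custom-keystores user@new-dpod-host:/installs/custom-keystores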


Creating the New System

Install the New System

Install a new DPOD system using the version 1.0.2.0 ISO file and apply the needed updates (fixes) so that the current system and the new system have the same DPOD version.

Disable Log Targets

Since DPOD will not be available during the migration process, we recommend disabling the monitored devices' log targets on the current (old) DPOD installation, as described in Configuring Monitored Gateways.

Move The Data Disk - Optional

The current transaction data is stored in the BigData store, located on the OS mount point /data.

It is not mandatory to migrate the current transaction data to the new system.

Not migrating the transaction data means losing current transaction data!


To migrate the current transaction data to the new system, please follow the procedure below.

If you choose NOT to migrate the transaction data (only configuration data), skip to "Create Staging Directory".


All technical names in the following section refer to a DPOD Appliance mode installation.

If your installation is a Non-Appliance (RHEL) installation, the technical names may be different (based on the organizational standard). Please contact your system administrator.

Exporting The Data LVM Configuration (Volume Group) On The Source Installation

The /data mount point is mapped to the LVM volume group vg_data.

  • Stop the application services using the Command Line Interface (CLI) (Option 2 "Stop All")
  • Un-mount the /data mount point

    umount /data
  • Mark the volume group as inactive

    vgchange -an vg_data
    output : 0 logical volume(s) in volume group "vg_data" now active
  • Export the volume group

    vgexport vg_data
    output : Volume group "vg_data" successfully exported
  • Comment out the /data mount point in the OS FS table (a scripted alternative is sketched below)

    Comment out the following line in /etc/fstab:

    #/dev/mapper/vg_data-lv_data /data                   ext4    defaults        1 2
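
As a scripted alternative to editing /etc/fstab by hand, the entry can be commented out with a single sed command. This is a minimal sketch that assumes the default appliance device mapper name shown above:

# Comment out the /data entry in /etc/fstab (assumes the default vg_data-lv_data mapping)
sed -i 's|^/dev/mapper/vg_data-lv_data|#/dev/mapper/vg_data-lv_data|' /etc/fstab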

Disconnect the Data Disk and Connect to the New System

Stop The System

Shut down the server (virtual / physical) using the command:

shutdown -h 0

Virtual Environment

Copy the Virtual Data Disk From the Current VM
  • Edit the current virtual machine settings
  • Locate the data disk (hard drive number 3)
  • It is recommended to copy the data disk vmdk file to the new system's directory rather than move it, in order to retain a fallback option in case of an issue during the migration.
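
If the VMs are hosted on VMware ESXi, one possible way to perform the copy is with vmkfstools on the host (the datastore and VM directory names below are placeholders; copying the file through the vSphere client datastore browser works just as well):

# Clone the data disk vmdk into the new VM's directory (placeholder datastore / VM names)
vmkfstools -i /vmfs/volumes/datastore1/dpod-old/dpod-old_2.vmdk /vmfs/volumes/datastore1/dpod-new/dpod-old-data.vmdk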

Edit the New System OS FS table

Change the /data mount point entry in the OS FS table:

From : /dev/mapper/vg_data-lv_data /data                   xfs     defaults        0 0
To   : /dev/mapper/vg_data_old-lv_data /data               xfs     defaults        0 0
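
A minimal scripted sketch of the same change (assuming the default mapping name shown above):

# Point the /data entry in /etc/fstab at the soon-to-be-renamed volume group
sed -i 's|/dev/mapper/vg_data-lv_data|/dev/mapper/vg_data_old-lv_data|' /etc/fstab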


Rename /data LVM volume Group

Rename the data volume group vg_data in the new system to avoid a volume group name collision when connecting the data disk from the old system.

vgrename vg_data vg_data_old

output : Volume group "vg_data" successfully renamed to "vg_data_old"


Connect the Virtual Disk to the New VM
  • Shut down the new system

shutdown -h 0


  • Configure the virtual disk on the new system by adding a new hard drive and choosing the existing (copied) disk


  • Start the new VM
Configure the New Disk
  • Make sure the newly connected exported volume group (LVM vg) and physical volume (LVM pv) are recognized by the OS

pvscan
output :
  PV /dev/sdb3   VG vg_app        lvm2 [7.08 GiB / 80.00 MiB free]
  PV /dev/sdb1   VG vg_logs       lvm2 [11.08 GiB / 84.00 MiB free]
  PV /dev/sdb5   VG vg_apptmp     lvm2 [4.10 GiB / 100.00 MiB free]
  PV /dev/sdb6   VG vg_shared     lvm2 [596.00 MiB / 84.00 MiB free]
  PV /dev/sdd1    is in exported VG vg_data [101.97 GiB / 0    free]
  PV /dev/sda2   VG vg_root       lvm2 [10.19 GiB / 196.00 MiB free]
  PV /dev/sdb2   VG vg_inst       lvm2 [7.08 GiB / 80.00 MiB free]
  PV /dev/sdc1   VG vg_data_old   lvm2 [100.00 GiB / 20.00 MiB free]
  Total: 8 [242.07 GiB] / in use: 8 [242.07 GiB] / in no VG: 0 [0   ]


vgscan
output :
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_data_old" using metadata type lvm2
  Found volume group "vg_inst" using metadata type lvm2
  Found volume group "vg_root" using metadata type lvm2
  Found exported volume group "vg_data" using metadata type lvm2
  Found volume group "vg_shared" using metadata type lvm2
  Found volume group "vg_apptmp" using metadata type lvm2
  Found volume group "vg_logs" using metadata type lvm2
  Found volume group "vg_app" using metadata type lvm2


  • Import the data volume group from the new disk added to the VM

    vgimport vg_data
    
    output :
      Volume group "vg_data" successfully imported
  • Activate the imported data volume group

    vgchange -ay vg_data

    output :
       1 logical volume(s) in volume group "vg_data" now active


  • Verify the data volume group status

    vgdisplay vg_data
    
    output :
       --- Volume group ---
      VG Name               vg_data
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               101.97 GiB
      PE Size               32.00 MiB
      Total PE              3263
      Alloc PE / Size       3263 / 101.97 GiB
      Free  PE / Size       0 / 0
      VG UUID               4vIe7h-qqLR-6qEa-aRID-dU8w-U5E2-gV7FoJ
Add the new mount point to the OS FS table
  • Configure the /data mount point in the OS FS table by adding the following line

/dev/mapper/vg_data-lv_data /data                   ext4     defaults        1 2


  • Comment out the following line

    #/dev/mapper/vg_data_old-lv_data /data                   xfs     defaults        0 0
  • Restart the system

    reboot
  • Ensure the /data mount point is mounted using the vg_data volume group

    df -h
    output :
    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/vg_root-lv_root      4.0G  1.6G  2.5G  39% /
    devtmpfs                         7.9G     0  7.9G   0% /dev
    tmpfs                            7.9G   56K  7.9G   1% /dev/shm
    tmpfs                            7.9G  9.1M  7.9G   1% /run
    tmpfs                            7.9G     0  7.9G   0% /sys/fs/cgroup
    /dev/sda1                        2.0G  101M  1.8G   6% /boot
    /dev/mapper/vg_data-lv_data      101G   81M   96G   1% /data
    /dev/mapper/vg_root-lv_var       4.0G  109M  3.9G   3% /var
    /dev/mapper/vg_logs-lv_logs       11G   44M   11G   1% /logs
    /dev/mapper/vg_shared-lv_shared  509M   26M  483M   6% /shared
    /dev/mapper/vg_root-lv_tmp       2.0G  726M  1.3G  36% /tmp
    /dev/mapper/vg_inst-lv_inst      7.0G  2.8G  4.3G  40% /installs
    /dev/mapper/vg_app-lv_app        7.0G  1.4G  5.7G  20% /app
    /dev/mapper/vg_apptmp-lv_apptmp  4.0G   33M  4.0G   1% /app/tmp
    tmpfs                            1.6G     0  1.6G   0% /run/user/0
    
  • Start the application services using the Command Line Interface (CLI) (Option 1 "Start All" )
Verify The Application Is Working Properly

Login to DPOD's WebUI and use the "DPOD Health" screens to verify all components are up and running.

Remove the Old Data Volume Group
  • Mark the volume group as inactive

    vgchange -an vg_data_old
    output : 0 logical volume(s) in volume group "vg_data_old" now active
  • Export the volume group

    vgexport vg_data_old
    output : Volume group "vg_data_old" successfully exported
  • Shut down the new system

    shutdown -h 0
  • Remove the unused virtual disk from the VM (it should be the 3rd virtual hard drive)


  • Start the VM
Verify The Application Is Working Properly

Login to DPOD's WebUI and use the "DPOD Health" screens to verify all components are up and running.

Physical Environment

When using a physical environment, the data disk can be either local storage (usually SSD) or remote central storage (SAN).

In both cases the procedure is similar to the one used for the virtual environment. However, on a physical server the local / remote storage must be physically moved to the new server.

  • Edit the New System's OS FS table
  • Rename the /data LVM volume Group
  • Configure a new disk
  • Add the new mount point to the OS FS table
  • Verify the application is working properly


Create Staging Directory

Create a staging directory on the new system:

mkdir -p /installs/system-backup/system-migration

Restore Internal DB

Copy the backup directory from the source system to the staging directory.
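
For example, the backup directory can be pulled from the source system with scp (the source host name is a placeholder):

# Copy the backup directory from the source (old) DPOD system into the staging directory
scp -r user@old-dpod-host:/installs/system-backup/full-backup-2017-09-13_22-38-56 /installs/system-backup/system-migration/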

Invoke the restore command for the internal DB, where:

-t : the restore type (db)

-d : the restore source backup directory. In the example below this is /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56

-f : the source backup file in the backup directory. In the example below this is full-backup-2017-09-13_22-38-56.tar.gz


app_restore.sh -t db -d /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56 -f full-backup-2017-09-13_22-38-56.tar.gz

stopping application ...
application stopped successfully.
starting restore process ...
restoring internal DB .....
system restore was successful
for more information see log file /installs/system-backup/db-restore-2017-09-17_15-04-19.log

Restore Application Files

Copy the backup directory from the source system to the staging directory (not required if it was already copied for the internal DB restore).

Invoke the restore command for the application, where:

-t : the restore type (app)

-d : the restore source backup directory. In the example below this is /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56

-f : the source backup file in the backup directory. In the example below this is full-backup-2017-09-13_22-38-56.tar.gz


app_restore.sh -t app -d /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56 -f full-backup-2017-09-13_22-38-56.tar.gz

stopping application ...
application stopped successfully.
starting restore process ...
restoring system files ...
making sure files to restore exist in backup file
files to restore exist in backup file
system restore was successful
for more information see log file /installs/system-backup/app-restore-2017-09-17_15-04-19.log

Change the Agent's IP Address


This section is applicable only if the agent's IP address on the new DPOD system is different from the current IP address. In most installations, the agent's IP address is identical to the DPOD server's IP address.

If the new DPOD system has a different IP address from the current one, the user must change the "Agent IP" in the nodes management screen.

Update the Store Configuration File

Run the following script on the new system:

/app/scripts/update_store_allocation.sh -l 4

Verify The Application Is Working Properly

Login to DPOD WebUI and use the "DPOD Health" screens to verify all components are up and running.

Make sure the new transaction data from the monitored devices is visible using the "Investigate" screen.

