This procedure is intended for users who want to migrate from one DPOD installation to another for the following scenarios:
- Migration from a DPOD Appliance mode installation version v1.0.0 (CentOS 6.7) to the CentOS 7.2 based installation introduced in version v1.0.2+.
- Migration from DPOD on a virtual server to a physical server (e.g. when increased load requires a physical server based installation).
- Migration from DPOD Appliance to Non-Appliance mode (RHEL) to better comply with the organization's technical / security requirements and standards.
A new procedure and tools were introduced to support customers with migrating existing DPOD Store data to a new DPOD installation in each of the scenarios above.
The procedure includes the following main steps:
- Gather the required artifacts from the current (source) DPOD installation.
- Install a new, clean DPOD installation.
- Import the artifacts into the new DPOD installation.
Prerequisites
- Both systems (current and new) must have the same DPOD version.
- The new system must be DPOD version 1.0.6.0 or above.
- Only Appliance installations (provided as an ISO file) are supported for this migration.
Note |
---|
We highly recommend contacting DPOD support during the planning of the migration in order to verify the technical procedure. |
Collect Required Artifacts from Source System
...
Application Files and Internal DB
...
The output backup directory will be the location of the backup log, as printed in the backup status message (in the example above this is /installs/system-backup/full-backup-2017-09-11_17-17-59).
Copy the backup directory to a temporary location outside the current DPOD system.
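For example, a minimal sketch using scp (assuming SSH access to a machine outside the DPOD system; the host name and target path are hypothetical placeholders):

```bash
# Copy the backup directory to a temporary location on another machine.
# "backup-host" and "/tmp/dpod-migration" are placeholders - adjust to your environment.
scp -r /installs/system-backup/full-backup-2017-09-11_17-17-59 \
    user@backup-host:/tmp/dpod-migration/
```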
Service Files
DPOD's service files are located in the /etc/init.d directory.
The service files will not be migrated to the new system because they are not compatible with the new OS version.
If the user altered one of the service files manually, it is their responsibility to migrate these changes to the new service files.
User Custom Artifacts
Note |
---|
If the user is using any custom artifacts which are NOT located in one of the system's built-in locations, it is the customer's responsibility to migrate these artifacts from the current system to the new one. |
Examples of custom artifacts include custom key stores used for DPOD SSL client authentication.
Creating New System
Install New System
Install a new DPOD system using the version 1.0.2.0 ISO file and apply the needed updates (fixes) so that the current system and the new system have the same DPOD version.
Disable Log Targets
Since DPOD will not be available during the migration process, we recommend disabling the monitored devices' log targets on the current (old) DPOD installation, as described in "Disable / Enable DPOD's Log Targets".
Move The Data Disk - Optional
The current transactions data is stored in the BigData store located on the OS mount point /data (the configuration data is kept in the internal DB, which was backed up in the previous steps).
Note |
---|
It is not mandatory to migrate the current transaction data to the new system. Not migrating the transaction data means losing current transaction data! |
To migrate the current transactions data to the new system, please follow the procedure below.
If you choose NOT to migrate the transaction data (only the configuration data), skip to "Create Staging Directory".
Note |
---|
All technical names in the following section refer to a DPOD Appliance mode installation. If your installation is a Non-Appliance RHEL installation, the technical names may be different (based on your organizational standards). Please contact your system administrator. |
Exporting The Data LVM Configuration (Volume Group) On The Source Installation
The /data mount point is mapped to the LVM volume group vg_data .
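To confirm this mapping before exporting, a quick sanity check with standard util-linux / LVM tools (a hedged sketch, not part of the original procedure):

```bash
# Show which device backs the /data mount point.
findmnt /data
# List the logical volumes in the vg_data volume group.
lvs -o lv_name,vg_name,lv_size vg_data
```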
- Stop the application services using the Main Admin Menu CLI (option 2 → "stop all")
- Un-mount the /data mount point:

```bash
umount /data
```

- Mark the volume group as non-active:

```bash
vgchange -an vg_data
# output: 0 logical volume(s) in volume group "vg_data" now active
```

- Export the volume group:

```bash
vgexport vg_data
# output: Volume group "vg_data" successfully exported
```

- Comment out the /data mount point in the OS FS table by commenting the following line in /etc/fstab:

```bash
#/dev/mapper/vg_data-lv_data /data ext4 defaults 1 2
```
Disconnect the Data Disk and Connect to the New System

Note |
---|
Disconnecting the data disk is not needed if you are reinstalling DPOD on the same server. |
Stop The System
Shut down the server (virtual / physical) using the command:

```bash
shutdown -h 0
```
Virtual Environment
Copy the Virtual Data Disk From the Current VM
- Edit the current virtual machine settings
- Locate the data disk (hard drive number 3)
- It is recommended to copy the data disk vmdk file to the new system's directory (we recommend NOT to move the vmdk file but to copy it, in order to retain a fallback option in case an issue is raised during the migration).
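For example, a hedged sketch of cloning the vmdk from an ESXi shell using vmkfstools (the datastore paths and VM names are hypothetical placeholders; adjust to your environment):

```bash
# Clone the data disk vmdk instead of moving it, keeping the original as a fallback.
# Datastore paths and VM directory names below are placeholders.
vmkfstools -i /vmfs/volumes/datastore1/old-dpod/old-dpod_2.vmdk \
           /vmfs/volumes/datastore1/new-dpod/new-dpod-data.vmdk
```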
Edit the New System OS FS table
Change the /data mount point in the OS FS table (/etc/fstab):

```bash
# From:
/dev/mapper/vg_data-lv_data /data xfs defaults 0 0
# To:
/dev/mapper/vg_data_old-lv_data /data xfs defaults 0 0
```
Rename the /data LVM Volume Group
Rename the data volume group vg_data in the new system to avoid a volume group collision when connecting the data disk from the old system:

```bash
vgrename vg_data vg_data_old
# output: Volume group "vg_data" successfully renamed to "vg_data_old"
```
Connect the Virtual Disk to New VM
- Shut down the new system:

```bash
shutdown -h 0
```
...
- Configure the virtual disk on the new system by adding a new hard drive and choosing the existing disk option
- Start the new VM
Configure the New Disk
- Make sure the new exported volume group (LVM vg) and physical volume (LVM pv) are recognized by the OS
...
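A minimal sketch of scanning for and importing the attached volume group (standard LVM commands; the exact steps may vary by environment):

```bash
# Scan for the newly attached physical volume and volume group.
pvscan
vgscan
# Import the exported volume group and activate it.
vgimport vg_data
vgchange -ay vg_data
```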
Verify the data volume group status:

```bash
vgdisplay vg_data
```

Output:

```
--- Volume group ---
VG Name               vg_data
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  4
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               101.97 GiB
PE Size               32.00 MiB
Total PE              3263
Alloc PE / Size       3263 / 101.97 GiB
Free  PE / Size       0 / 0
VG UUID               4vIe7h-qqLR-6qEa-aRID-dU8w-U5E2-gV7FoJ
```
Add the new mount point to the OS FS table
- Configure the /data mount point in the OS FS table by adding the following line:

```bash
/dev/mapper/vg_data-lv_data /data ext4 defaults 1 2
```
- Comment out the following line:

```bash
#/dev/mapper/vg_data_old-lv_data /data xfs defaults 0 0
```
- Restart the system:

```bash
reboot
```
- Make sure the /data mount point is mounted using the vg_data volume group:

```bash
df -h
```

Output:

```
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root      4.0G  1.6G  2.5G  39% /
devtmpfs                         7.9G     0  7.9G   0% /dev
tmpfs                            7.9G   56K  7.9G   1% /dev/shm
tmpfs                            7.9G  9.1M  7.9G   1% /run
tmpfs                            7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1                        2.0G  101M  1.8G   6% /boot
/dev/mapper/vg_data-lv_data      101G   81M   96G   1% /data
/dev/mapper/vg_root-lv_var       4.0G  109M  3.9G   3% /var
/dev/mapper/vg_logs-lv_logs       11G   44M   11G   1% /logs
/dev/mapper/vg_shared-lv_shared  509M   26M  483M   6% /shared
/dev/mapper/vg_root-lv_tmp       2.0G  726M  1.3G  36% /tmp
/dev/mapper/vg_inst-lv_inst      7.0G  2.8G  4.3G  40% /installs
/dev/mapper/vg_app-lv_app        7.0G  1.4G  5.7G  20% /app
/dev/mapper/vg_apptmp-lv_apptmp  4.0G   33M  4.0G   1% /app/tmp
tmpfs                            1.6G     0  1.6G   0% /run/user/0
```
- Start the application services using the Main Admin Menu CLI (option 1 → "start all" )
Verify The Application Is Working Properly
Login to DPOD's WebUI and use the "Internal Health" screens to verify all components are up and running.
Remove the Old Data Volume Group
- Mark the volume group as non-active:

```bash
vgchange -an vg_data_old
# output: 0 logical volume(s) in volume group "vg_data_old" now active
```
- Export the volume group:

```bash
vgexport vg_data_old
# output: Volume group "vg_data_old" successfully exported
```

- Shut down the new system:

```bash
shutdown -h 0
```
- Remove the unused virtual disk from the VM (should be the 3rd virtual hard drive)
- Start the VM
Verify The Application Is Working Properly
Login to DPOD's WebUI and use the "Internal Health" screens to verify all components are up and running.
Physical Environment
When using a physical environment, the data disk can be either local storage (usually SSD) or remote central storage (SAN).
In both cases the procedure is similar to the one used for the virtual environment. However, on a physical server the local / remote storage should be physically moved to the new server.
- Edit the New System's OS FS table
- Rename the /data LVM volume Group
- Configure a new disk
- Add the new mount point to the OS FS table
- Verify the application is working properly
Create a staging directory on the new system
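A minimal sketch of creating the staging directory (the path matches the restore examples below; adjust to your environment):

```bash
# Create the staging directory used by the restore examples in this section.
mkdir -p /installs/system-backup/system-migration
```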
...
-d : the restore source backup directory. In the example below this is /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56
-f : the source backup file in the backup directory. In the example below this is full-backup-2017-09-13_22-38-56.tar.gz
...
Copy the backup directory from the source system to the staging directory (not required if it was already copied during the internal DB restore)
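For example, a hedged sketch pulling the backup over scp (the source host name "old-dpod-host" is a hypothetical placeholder):

```bash
# Pull the backup directory from the source (old) DPOD system into the staging directory.
scp -r user@old-dpod-host:/installs/system-backup/full-backup-2017-09-13_22-38-56 \
    /installs/system-backup/system-migration/
```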
...
-d : the restore source backup directory. In the example below this is /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56
-f : the source backup file in the backup directory. In the example below this is full-backup-2017-09-13_22-38-56.tar.gz
...
```bash
app_restore.sh -t app \
  -d /installs/system-backup/system-migration/full-backup-2017-09-13_22-38-56 \
  -f full-backup-2017-09-13_22-38-56.tar.gz
```

Output:

```
stopping application ...
application stopped successfully.
starting restore process ...
restoring system files ...
making sure files to restore exist in backup file
files to restore exist in backup file
system restore was successful
for more information see log file /installs/system-backup/app-restore-2017-09-17_15-04-19.log
```
Change the Agent's IP Address
Note |
---|
This section is applicable only if the agent's IP address on the new DPOD system is different from the current IP address. In most installations the agent's IP address is identical to the DPOD server's IP address. |
If the new DPOD system has a different IP address from the current one, the user must change the "Agent IP" in the nodes management screen:
- Start the application services using the Main Admin Menu CLI (option 1 → "start all")
- In the web console, navigate to System → Nodes and edit the IP address of the agents in your data node row.
- Re-configure syslog for the default domain on each monitored device (the "Setup Syslog for Device" action in the Device Management section).
- Restart the keepalive service using the Main Admin Menu CLI
...
Update the Store Configuration File
In order to reconfigure the Store configuration file based on the data disk size, follow the section "Update the Store Configuration File" in the "Increase DPOD's Store Space" procedure.
Verify The Application Is Working Properly
Login to DPOD's WebUI and use the "Internal Health" screens to verify all components are up and running.
Make sure new transaction data from the monitored devices is visible using the "Investigate" screens.