...
- Migration from a DPOD Appliance mode installation v1.0.0 (CentOS 6.7) to CentOS 7.2, which was introduced in v1.0.2+.
- Migration from DPOD on a virtual server to a physical server (e.g. when the increased load requires a physical server based installation).
- Migration from DPOD Appliance to Non-Appliance mode (RHEL) to better comply with the organization's technical / security requirements and standards.
New procedures and tools were introduced to support customers with migrating existing DPOD Store data to a new DPOD installation in each of the scenarios above.
...
- Gather required artifacts from current (migrate from) DPOD installation.
- Install a new DPOD "clean" installation.
- Import artifacts to the new DPOD installation.
...
- Both systems (current and new) must run the same DPOD version, 1.0.6.0 or above.
Note |
---|
We highly recommend contacting DPOD support during the planning of the migration in order to verify the technical procedure. |
...
Examples of custom artifacts include custom key stores used for DPOD SSL client authentication.
Install new System
Install a new DPOD system using the version 1.0.2.0 ISO file and apply the needed updates (fixes) so that the current system and the new system have the same DPOD version.
...
Since DPOD will not be available during the migration process, we recommend disabling the monitored devices' log targets on the current (old) DPOD installation as described in "Disable / Enable DPOD's Log Targets".
...
Note |
---|
It is not mandatory to migrate the current transaction data to the new system. However, not migrating it means losing the current transaction data! |
...
To migrate the current transaction data to the new system, please follow the procedure below.
If you choose NOT to migrate transaction data (only configuration data), skip to "Create Staging Directory".
Note |
---|
All technical names in the following section refer to a DPOD Appliance mode installation. If your installation is a Non-Appliance RHEL installation, the technical names may differ (based on organizational standards). Please contact your system administrator. |
...
- Stop the application services using the Main Admin Menu CLI (option 2 → "stop all" )
Un-mount the /data mount point

```bash
umount /data
```
Mark the volume group as non-active
```bash
vgchange -an vg_data
```

Output: 0 logical volume(s) in volume group "vg_data" now active
Export the volume group
```bash
vgexport vg_data
```

Output: Volume group "vg_data" successfully exported
Comment out the /data mount point in the OS FS table
Comment out the following line in /etc/fstab:

```bash
#/dev/mapper/vg_data-lv_data /data ext4 defaults 1 2
```
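If you prefer to script the edit, the following is a minimal sketch shown on a sample copy of the fstab line, so the result can be reviewed before touching the real file; the /tmp path is an illustrative assumption. Apply the same sed expression to /etc/fstab (as root) only after verifying the output.

```shell
# Demonstrate the edit on a sample copy first (illustrative path).
printf '/dev/mapper/vg_data-lv_data /data ext4 defaults 1 2\n' > /tmp/fstab.sample

# Prefix the vg_data /data entry with '#' to comment it out
# ('&' in the replacement stands for the matched text).
sed -i 's|^/dev/mapper/vg_data-lv_data|#&|' /tmp/fstab.sample

cat /tmp/fstab.sample
```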
...
Change the /data mount point in the OS FS table
```
From: /dev/mapper/vg_data-lv_data     /data xfs defaults 0 0
To:   /dev/mapper/vg_data_old-lv_data /data xfs defaults 0 0
```
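This change can also be scripted. The sketch below applies the same From/To rewrite with sed on a sample copy; the /tmp path is an assumption, and the expression should be applied to /etc/fstab (as root) only after the output is verified.

```shell
# Sample copy of the current /data entry (illustrative path).
printf '/dev/mapper/vg_data-lv_data /data xfs defaults 0 0\n' > /tmp/fstab.sample

# Point the entry at the renamed volume group (vg_data -> vg_data_old).
sed -i 's|^/dev/mapper/vg_data-lv_data|/dev/mapper/vg_data_old-lv_data|' /tmp/fstab.sample

cat /tmp/fstab.sample
```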
...
Mark the volume group as non-active
```bash
vgchange -an vg_data_old
```

Output: 0 logical volume(s) in volume group "vg_data_old" now active
Export the volume group
```bash
vgexport vg_data_old
```

Output: Volume group "vg_data_old" successfully exported
Shut down the new system
```bash
shutdown -h 0
```
- Remove the unused virtual disk from the VM (should be the 3rd virtual hard drive).
...
When using a physical environment, the data disk can be either local storage (usually SSD) or remote central storage (SAN).
In both cases the procedure is similar to the one used for the virtual environment. However, on a physical server, the local/remote storage should be physically moved to the new server.
- Edit the New System's OS FS table
- Rename the /data LVM volume Group
- Configure a new disk
- Add the new mount point to the OS FS table
- Verify the application is working properly
Create a staging directory on the new system
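A minimal sketch of creating the staging directory; the path /tmp/dpod-staging is an assumption, so choose any filesystem with enough free space for the exported artifacts.

```shell
# Staging path is an assumption; override STAGING to suit your environment.
STAGING="${STAGING:-/tmp/dpod-staging}"
mkdir -p "$STAGING"

# Confirm the directory exists and check available space on its filesystem.
ls -ld "$STAGING"
df -h "$STAGING"
```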
...
Note |
---|
This section is applicable only if the agent's IP address on the new DPOD system is different from the current IP address. In most installations, the agent's IP address is identical to the DPOD server's IP address. |
...
- Start the application services using the Main Admin Menu CLI (option 1 → "start all" )
- In the Web Console, navigate to System → Nodes and edit the IP address of the agents in your data node row.
- Re-configure syslog for the default domain on each monitored device (see Setup Syslog for Device in the Device Management section).
- Restart the keepalive service using the Main Admin Menu CLI
...