...

Code Block
MonTierUpdateInstaller.sh -u DPOD-update-1_0_9.sfs -s DPOD-update-1_0_9.md5

Data Migration Tool

The upgrade process checks for the existence of Elasticsearch indices that were created in early versions of the product.
If such early-version indices exist, the upgrade process stops and notifies you of a manual step that you will need to run:

Code Block
Some of the stored application data cannot be migrated to the latest version of the Store service.

A data migration tool has been deployed in /installs/data-migration-tool.

You may run it using the following command:

/installs/data-migration-tool/data-migration-tool.sh

Further information can be found in the documentation at:

Admin Guide -> Installation and Upgrade -> Upgrade -> Upgrade to 1.0.10.0 - Special Steps

Configuring the Data Migration Tool

You may edit the configuration file /installs/data-migration-tool/data-migration-tool.conf before running the tool. Two entries are of interest:

  • duration.limit (default: 999999) - limits the execution time in minutes. This option is useful if you want to schedule the tool to run at night: you can limit each run to a few hours, so performance will not suffer during normal working hours.
  • delete.kibana_indices (default: true) -
    "true" - delete old Kibana indices.
    "false" - migrate Kibana indices to the new store version. Note: even when the Kibana indices are migrated to the new version, the Kibana version that comes with DPOD 1.0.9 will not be able to read the old indices.
  • Leave the other settings in data-migration-tool.conf as they are, unless advised otherwise by support.
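For reference, the relevant part of data-migration-tool.conf might look like the following sketch. Only the two entries described above are taken from this document; the exact layout of the file and any other entries in it may differ:

```shell
# /installs/data-migration-tool/data-migration-tool.conf (illustrative excerpt)

# Limit each run to 120 minutes, e.g. for a nightly maintenance window
duration.limit=120

# Migrate old Kibana indices to the new store version instead of deleting them
delete.kibana_indices=false
```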

Running the Data Migration Tool

Run the data migration tool manually, as described:

Code Block
/installs/data-migration-tool/data-migration-tool.sh


The data migration process may take anywhere between a few minutes and a few days, depending on the amount of data to migrate and the server load.
A rough estimate of the remaining run time is calculated and presented in the console output during the run. These estimates are also written to the log file.
The estimate is based on the current server load, so it may change significantly between peak and quiet hours.

Note

Make sure not to interrupt the SSH session during the data migration operation.
Alternatively, you can run the data migration in "no hang-up" mode, which allows the process to continue running even after the SSH session is closed. In this mode, the console output is written to the nohup.log file in the local directory.

nohup /installs/data-migration-tool/data-migration-tool.sh &
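After reconnecting over SSH, standard shell commands can be used to check on a run started in "no hang-up" mode. This is an illustrative sketch; the nohup.log file name is the one mentioned in the note above:

```shell
# Follow the console output (including the time-left estimates) of the nohup run
tail -f nohup.log

# Check whether the data migration tool is still running
pgrep -f data-migration-tool.sh
```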

...

Code Block
Data migration tool finished successfully

Interrupting the Data Migration Tool

Pressing Ctrl+C, or reaching the duration.limit set in the configuration file, will stop the tool during the migration process.
When the tool is stopped, on its next run it will re-process the last index that was being migrated.

While this is usually not an issue, note that in some cases it may cause complications. For example:
1. The user wants the tool to run during a nightly maintenance window, between 2 AM and 4 AM.
2. The tool is scheduled using cron for 2 AM, and the duration.limit setting is set to 120 minutes.
3. For this specific user, depending on their hardware and data sizes, processing each index takes about 3 hours.
4. Since the tool is interrupted after 2 hours, on each subsequent run it will try to migrate the same index again and again.
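The scheduling described in the example above could be expressed as a crontab entry like the following. This is an illustrative sketch: the log path is an assumption, and duration.limit=120 must be set in data-migration-tool.conf as described earlier, since cron itself does not enforce the time limit:

```shell
# Run the data migration tool every night at 2 AM;
# duration.limit=120 in data-migration-tool.conf ends each run after 2 hours
0 2 * * * /installs/data-migration-tool/data-migration-tool.sh >> /var/log/data-migration-cron.log 2>&1
```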

Resuming Software Update

To proceed with the software update, rerun the software update command:

...