TODO - ADD Active/Active - full duplication scenario

TODO - fully explain Active / standby scenario and api process - step by step

...

High Availability (HA), Resiliency or Disaster Recovery (DR) Implementation

There are multiple methods available for DPOD HA/DR planning and configuration. The appropriate method is determined by the customer's requirements, implementation and infrastructure.

...

Terminology

Node State/mode - A DPOD node can be in one of the following states: Active (On, performing monitoring activities), Inactive (Off, not performing any monitoring activities), DR Standby (On, not performing monitoring activities).

Primary Node - A DPOD installation that actively monitors DataPower instances under normal circumstances (Active state).

Secondary Node - A DPOD installation, identical to the Primary Node (in a shared storage scenario it is the same image as the primary node), that is in DR Standby or Inactive state.

3rd party DR software - A software tool that assists in identifying when the primary node's state has changed from active to inactive, and initiates launching the secondary node as active.

DPOD Scalability vs. HA/DR

The DPOD architecture supports installation of multiple DPOD nodes for scalability - to support high throughput in cases of high rate of transactions per second (TPS). However, this does not provide a solution for HA/DR requirements.

For simplicity, this document assumes that only one DPOD node is installed, but the same scenarios and considerations apply to multi-node installations.

Important HA/DR Considerations

Consult your BCP/DR/System/Network Admin and address the following questions before selecting which HA/DR method(s) to use with DPOD:

 1. For large installations, DPOD can capture vast volumes of data. Replicating that much data for DR purposes may consume significant network bandwidth, and may incur 3rd party storage replication license costs.

You should consult and decide: Is it cost effective to replicate DPOD's data, or is it acceptable to launch another instance of DPOD with configuration replication only?

 


2. The software used for the Active/Passive scenario:

Will you run DPOD on a virtual infrastructure like VMware, or can you use VMware vMotion or Active/Passive cluster management tools that can help identify and relaunch DPOD on a different cluster member?

...


3. You are expected to have Active/Passive software or another mechanism in place to identify when a DPOD node becomes inactive, and launch a new one on an active cluster member.

Do you have such a tool (DR software)?

...


4. When launching a new DPOD instance on the backup cluster member:

Will the new instance keep the same network configuration as the primary instance (for example: IP address, DNS, NTP, LDAP, SMTP), or will the configuration change?

...


5. Some DataPower architecture solutions (Active/Passive or Active/Active) affect DPOD's configuration. If the DataPower IP address changes, then your DPOD configuration may need to change.

Does your DataPower architecture use an Active/Passive deployment? If so, will the passive DataPower have the same IP address when it switches to active?

Common Scenarios for DPOD HA/DR Implementation

Scenario A: Active/Passive - DPOD's IP Address remains the same - Shared Storage

Assumptions:

  1. The customer has DataPower appliances deployed using either an Active/Passive, Active/Standby or Active/Active configuration. All DataPower appliances in any of these configurations have unique IP addresses.
  2. The customer has storage replication capabilities to replicate DPOD disks based on the disks’ replication policy described above.
  3. A primary DPOD node is installed once and is configured to monitor all DataPower appliances (active, standby and passive). The secondary node will use the same disks on shared storage.
  4. All DPOD network services (NTP, SMTP, LDAP etc.) retain the same IP addresses in a failover event (or else a post configuration script is required to be run by the DR software).
  5. The customer has a 3rd party software tool or scripts that can:
    1. Identify unavailability of the primary DPOD node.
    2. Launch a secondary DPOD node using the same IP address as the primary one (usually on different physical hardware).
  6. The secondary DPOD node is not operating when business is as usual, as disk replication is required and the secondary node has the same IP address as the primary DPOD node.
  7. This scenario might not be suitable for high load implementations, as replication of the DPOD data disk might not be acceptable.

During a disaster:

  1. The customer's DR software should identify a failure in the DPOD primary node (e.g. by pinging an access IP, sampling the user interface URL or both).
  2. The customer's DR software should launch the secondary DPOD node using the same IP address as the failed primary node (or initiate changing the IP address if not already configured that way).

DPOD will be available in the following way:

  • As the secondary DPOD node has the same IP address, all DataPower appliances will be able to access it.
  • Since all DataPower appliances will have the same IP addresses - DPOD can continue to sample them.
  • Since the secondary DPOD node has the same IP address as the primary one, access to DPOD's console retains the same URL.
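The failure-detection duty above belongs to the customer's DR software; DPOD does not ship such a tool. As a minimal sketch of the probe described in step 1 (pinging an access IP and sampling the user interface URL), assuming hypothetical host and URL values and a Linux `ping`:

```python
# Minimal health-probe sketch for the primary DPOD node, combining the two
# checks mentioned above (pinging an access IP and sampling the UI URL).
# The host/URL are supplied by the caller and are hypothetical here; a real
# DR tool would run this periodically and fail over only after repeated failures.
import subprocess
import urllib.request

def ping_ok(host: str, timeout_sec: int = 2) -> bool:
    """Return True if the host answers a single ICMP echo request (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_sec), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def console_ok(url: str, timeout_sec: int = 5) -> bool:
    """Return True if the DPOD web console URL answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_sec) as resp:
            return resp.status == 200
    except OSError:
        return False

def primary_is_healthy(host: str, console_url: str) -> bool:
    # Policy choice: require both checks to pass. The exact policy
    # (either/both, retry counts) is up to the DR tooling.
    return ping_ok(host) and console_ok(console_url)
```

The DR software would invoke such a probe on a schedule and trigger the relaunch in step 2 only after several consecutive failures, to avoid failing over on a transient network blip.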

Scenario B: Active/Passive – DPOD's IP Address changes - Shared Storage

Assumptions:

  1. The customer has DataPower appliances deployed using either an Active/Passive or Active/Standby configuration. All DataPower appliances in any of these configurations have unique IP addresses.
  2. The customer has storage replication capabilities to replicate DPOD disks based on the disks’ replication policy described above.
  3. A primary DPOD node is installed once and is configured to monitor all DataPower appliances (active, standby and passive). The secondary node will use the same disks on shared storage.
  4. All DPOD network services (NTP, SMTP, LDAP etc.) retain the same IP addresses in a failover event (or else a post configuration script is required to be run by the DR software).
  5. The customer has a 3rd party software tool or scripts that can:
    1. Identify unavailability of the primary DPOD node.
    2. Launch a secondary DPOD node using a different IP address to the primary one (usually on different physical hardware).
  6. The secondary DPOD node is not operating when business is as usual, since disk replication is required.
  7. This scenario might not be suitable for high load implementations, as replication of the DPOD data disk might not be acceptable.

During a disaster:

  1. The customer's DR software should identify a failure in DPOD's primary node (e.g. by pinging an access IP, sampling the user interface URL or both).
  2. The customer's DR software should launch the secondary DPOD node using a different IP address to the failed primary node (or initiate changing the IP address if not already configured that way).
  3. The customer's DR software should execute a command/script to change DPOD's IP address as described in the documentation.
  4. The customer's DR software should change the DNS name for the DPOD node's web console to reference an actual IP address or use an NLB in front of both DPOD web consoles.
  5. The customer's DR software should disable all DPOD log targets, update DPOD host aliases and re-enable all log targets in all DataPower devices. This is done by invoking a REST API call to DPOD (see the "refreshAgents" API under Devices REST API).

DPOD will be available in the following way:

  • Although the secondary DPOD node has a different IP address, all the DataPower appliances will still be able to access it since their internal host aliases pointing to DPOD will be replaced (step 5 above).
  • As all DataPower appliances retain the same IP addresses - the secondary DPOD node that was just made active can continue to sample them.
  • Although the secondary DPOD node has a different IP address, all users can access DPOD’s web console because its DNS name has been changed or it is behind an NLB (step 4 above).
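Step 5 above is a single REST call issued by the DR script against the newly active node. A sketch under stated assumptions - the endpoint path, port and bearer-token authentication below are invented for illustration; only the "refreshAgents" operation name comes from this document, and the real request shape must be taken from the Devices REST API reference:

```python
# Sketch of the Scenario B step 5 call: invoke DPOD's "refreshAgents" API so
# DPOD disables all log targets, updates its host aliases on every monitored
# DataPower device and re-enables the log targets.
import urllib.request

def build_dpod_post(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated POST request to a DPOD REST endpoint."""
    return urllib.request.Request(
        base_url + path,
        data=b"{}",
        method="POST",
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )

# The path below is an assumption, not the documented endpoint:
req = build_dpod_post("https://dpod.example.com",
                      "/op/api/v1/devices/refreshAgents", "TOKEN")
# urllib.request.urlopen(req, timeout=60)  # executed by the DR script
```

Error handling, retries and TLS certificate verification are left to the DR tooling.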

Scenario C: Active/Standby – Two separate DPOD installations with no shared storage

Assumptions:

  1. The customer has DataPower appliances deployed using either an Active/Passive or Active/Standby configuration. All DataPower appliances in any of these configurations have unique IP addresses.
  2. Two DPOD nodes are installed (requires DPOD version 1.0.5+), one operates as the Active node and the other one as Standby. After installing the Standby node, it must be configured to run in Standby state. See the "makeStandby" API under DR REST API.
  3. Both DPOD nodes should have the same environment name. The environment name is set by the customer during DPOD software deployment or during an upgrade, and is visible in the top navigation bar (circled in red in the image below):



  4. When the DPOD node is in DR Standby mode, a label is displayed next to the environment name in the Web Console. A refresh (F5) may be required to reflect recent changes if the makeStandby API has just been executed, or when the DPOD status has changed from active to standby or vice versa. See the image below:



  5. As both nodes are up, no configuration or data replication can exist in this scenario. The customer is expected to configure each DPOD node as a standalone installation, including all system parameters, security groups/roles, LDAP parameters, certificates, custom reports and report scheduling, custom alerts and alert scheduling, maintenance plan and user preferences. DPOD does not perform any configuration synchronization.
  6. Importantly, the customer must add DataPower instances to each installation in order to monitor all DataPower devices (active, standby and passive). Starting with DPOD v1.0.5, a REST API may be utilized to add a new DataPower device to DPOD without using the UI (see the Devices REST API). The customer must add DataPower instances to the standby DPOD node and set the agents for each device from the Device Management page in the web console (or by using the Devices REST API). Setting up the devices in the standby DPOD node will not make any changes to the monitored DataPower devices (no log targets, host aliases or configuration changes will be made).
  7. All DPOD network services (NTP, SMTP, LDAP etc.) have the same IP addresses.
  8. The customer has a 3rd party software tool or scripts that can:
    1. Identify unavailability of the primary DPOD node.
    2. Change the state of the secondary node (that is in Standby state) to Active state.
  9. The standby DPOD node can still be online as disk replication is not required.

...

  10. This scenario will not provide high availability for data. To load data from the primary node, the customer is required to restore backups taken from the primary node.
  11. During state transition of the secondary DPOD node from Active back to Standby there might be some data loss.
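Since devices must be registered on both nodes, the device-registration step in assumption 6 is a natural candidate for scripting against the Devices REST API. The sketch below is illustrative only: the endpoint path, payload fields and authentication scheme are assumptions and must be replaced with the documented request shape from the Devices REST API reference:

```python
# Hypothetical sketch of registering a DataPower device on a DPOD node via
# the Devices REST API (available from DPOD v1.0.5), so the same device list
# can be applied to both the active and the standby node from one script.
import json
import urllib.request

def add_device_payload(name: str, host: str, soma_port: int = 5550) -> bytes:
    """Serialize a minimal (assumed) device registration payload."""
    return json.dumps({"name": name, "host": host, "somaPort": soma_port}).encode()

def register_device(dpod_host: str, token: str, name: str, host: str) -> None:
    """POST the device to one DPOD node. The path below is an assumption."""
    req = urllib.request.Request(
        "https://{}/op/api/v1/devices".format(dpod_host),
        data=add_device_payload(name, host),
        method="POST",
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)
```

Running the same registration loop against both nodes keeps their device lists aligned, which this scenario otherwise leaves as a manual duty.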

During a disaster:

  1. The customer's DR software should identify a failure in the DPOD primary node (e.g. by pinging an access IP, sampling the user interface URL or both).
  2. The customer's DR software should enable the standby DPOD node by calling the "standbyToActive" API (see DR REST API). This API will point DPOD's log targets and host aliases of the monitored devices to the standby node, and enable most timer based services (e.g. reports, alerts) on the secondary node.
  3. The customer's DR software should change the DNS name for the DPOD node's web console to reference an actual IP address or use an NLB in front of both DPOD web consoles.

DPOD will be available in the following way:

  • Although the secondary DPOD node has a different IP address, all the DataPower appliances will still be able to access it since their internal host aliases pointing to DPOD will be replaced (step 2 above).
  • As all DataPower appliances retain the same IP addresses - DPOD can continue to sample them.
  • Although the secondary DPOD node has a different IP address, all users can access DPOD’s web console because its DNS name has been changed or it is behind an NLB (step 3 above).

    Note - All data from the originally Active DPOD will not be available!


In a "Return to Normal" scenario:

  1. Right after re-launching the primary node, make a call to the "standbyToInactive" API (see DR REST API) to disable the standby node.
  2. Call the "activeBackToActive" API (see DR REST API) to re-enable the primary node. This will point DPOD's log targets and host aliases on the monitored devices back to the primary DPOD node.
  3. The customer's DR software should change the DNS name for the DPOD node's web console to reference an actual IP address or use an NLB in front of both DPOD web consoles.
  4. During state transition of the Primary node from Active to Standby there might be some data loss.
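The Scenario C state transitions (standbyToActive during a disaster; standbyToInactive then activeBackToActive on return to normal) could be orchestrated by a small script. Only the operation names below come from this document; the endpoint paths and bearer-token authentication are assumptions, to be checked against the DR REST API reference:

```python
# Sketch of the Scenario C state transitions driven by the DR REST APIs.
from urllib import request

def dr_api_url(node_host: str, operation: str) -> str:
    """Build the URL of a DR REST API operation on a node (path is assumed)."""
    return "https://{}/op/api/v1/dr/{}".format(node_host, operation)

def _post(url: str, token: str) -> None:
    req = request.Request(url, data=b"", method="POST",
                          headers={"Authorization": "Bearer " + token})
    request.urlopen(req, timeout=30)

def failover(standby_host: str, token: str) -> None:
    """Disaster: make the standby node Active (repoints log targets/host aliases)."""
    _post(dr_api_url(standby_host, "standbyToActive"), token)

def return_to_normal(standby_host: str, primary_host: str, token: str) -> None:
    """After relaunching the primary: disable the standby, then re-enable the primary."""
    _post(dr_api_url(standby_host, "standbyToInactive"), token)
    _post(dr_api_url(primary_host, "activeBackToActive"), token)
```

The ordering in return_to_normal mirrors steps 1 and 2 above: the standby must be taken out of Active state before the primary reclaims the log targets and host aliases.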

Scenario D: Limited Active/Active – Two separate DPOD installations with no shared storage

Assumptions:

  1. The customer has DataPower appliances deployed using either an Active/Passive, Active/Active or Active/Stand-by configuration. All DataPower appliances in any of these configurations have unique IP addresses.
  2. Two DPOD nodes are installed (both are v1.0.5+), both running in Active state. 
  3. Both DPOD nodes must have different environment names. The environment name is set by the customer during DPOD software deployment, and is visible in the top navigation bar.
  4. Both DPOD nodes are configured separately to monitor all DataPower Devices (active, standby and passive). Starting with DPOD v1.0.5 a new REST API may be utilized to add a new DataPower device to DPOD without using the UI (see Devices REST API). As both nodes are up, no configuration replication can exist in this scenario.
  5. As both nodes are up, no data replication can exist in this scenario. The customer is expected to configure each DPOD node as a standalone deployment, including all system parameters, security groups/roles, LDAP parameters, certificates, custom reports and report scheduling, custom alerts and alert scheduling, maintenance plan and user preferences. DPOD does not perform any configuration synchronization.
  6. Importantly, the customer must add DataPower instances to each installation to monitor all DataPower devices (active, standby and passive). Starting with DPOD v1.0.5, a REST API may be utilized to add a new DataPower device to DPOD without using the UI (see Devices REST API). The customer must add DataPower instances to each DPOD node and set the agents for each device from the Device Management page in the web console (or by using the Devices REST API).
  7. All DPOD network services (NTP, SMTP, LDAP etc.) have the same IP addresses.
  8. The customer is expected to replicate all configurations and definitions for each installation manually. DPOD replicates neither data nor configurations/definitions.
  9. Important! Since the two installations are completely independent and no data is replicated, data inconsistency may follow, as one node may capture information while the other is down for maintenance or was started at a different time. This might affect reports and alerts.
  10. Important! Each DPOD installation will create 2 log targets for each domain. If one DataPower device is connected to 2 DPOD nodes, then each domain will need 4 log targets. As DataPower has a limit of ~1000 log targets starting with firmware 7.6, the customer must take care not to reach the log target limit.
  11. All logs and information will be sent twice over the network, thus network bandwidth will be doubled!
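The log target budget in item 10 can be sanity-checked with simple arithmetic. The 2-targets-per-DPOD-per-domain figure and the ~1000 per-device limit are taken from this document; the example device is hypothetical:

```python
# Quick check of the per-device log target budget for Limited Active/Active:
# each DPOD installation creates 2 log targets per domain, so a device
# monitored by two DPOD nodes needs 4 log targets per domain.
LOG_TARGETS_PER_DPOD_PER_DOMAIN = 2
DEVICE_LOG_TARGET_LIMIT = 1000  # approximate DataPower limit (firmware 7.6+)

def log_targets_needed(domains: int, dpod_nodes: int) -> int:
    """Log targets DPOD will create on one device for this many domains/nodes."""
    return domains * dpod_nodes * LOG_TARGETS_PER_DPOD_PER_DOMAIN

# Example: a device with 200 domains monitored by 2 DPOD nodes needs
# 200 * 2 * 2 = 800 log targets, uncomfortably close to the ~1000 limit.
assert log_targets_needed(200, 2) == 800
assert log_targets_needed(200, 2) < DEVICE_LOG_TARGET_LIMIT
```

The same arithmetic shows that a 300-domain device monitored by two nodes (1200 log targets) would exceed the limit, so Scenario D is not viable for such devices.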

During a disaster:

  1. No action is required. The DataPower instances will push data to both DPOD nodes.

DPOD will be available in the following way: 

  • The active node will continue to operate as it was operating before.
  • All users can access DPOD’s web console as it was accessible before the disaster.
  • Note - Some data from the DPOD node that failed will not be available.

In a "Return to Normal" scenario:

  1. No action is required. The DataPower instances will push data to both DPOD nodes.
  2. The data gathered during the disaster period cannot be synced back to the recovered node.

Backups

To improve product recovery, an administrator should perform regular backups as described in the backup section.