IBM DataPower Operations Dashboard v1.0.9.0


TODO - diagram

Overview

The remote collector deployment assists in two scenarios:

  • Data should be collected across several deployments, but a consolidated single view is required (only one Local Node is required).
  • A Local Node is reaching its CPU limit and an offload of work is required (up to 20% of the CPU can be offloaded under high load).

To set up a new Remote Collector server, install an additional DPOD server based on the prerequisites below. The node that contains the data and the console is called the "Local Node", and the second installation (which contains only the Syslog and WS-M agents) is called the "Remote Collector".

Prerequisites

  1. The DPOD cell manager and all cell FCMs must be the same version (minimum version is v1.0.9.0).
  2. The DPOD cell manager can be either an "Appliance Mode" or a "Non Appliance Mode" installation with the "Medium" architecture type, as detailed in the Hardware and Software Requirements. The manager server can be either virtual or physical.
  3. Each DPOD cell member (FCM) should be a "Non Appliance Mode" installation with the "High_20dv with High Load" architecture type, as detailed in the Hardware and Software Requirements.
  4. Each cluster component (manager / FCM) should have two network interfaces:
    1. External interface - for DPOD users to access the UI and for communication between DPOD and the Monitored Gateways.
    2. Internal interface - for communication between internal DPOD components (should be a 10Gb Ethernet interface; for more information see configuring the FCM).
  5. Each installation requires different ports to be opened in the firewall - see Table 1.



Table 1

From | To | Ports (Defaults) | Protocol | Usage
Cell Manager DPOD Appliance | Each Monitored Device | 5550 (TCP) | HTTP/S | Monitored Device administration management interface
Cell Manager DPOD Appliance | DNS Server | 53 (TCP and UDP) | DNS | DNS services. Static IP address may be used.
Cell Manager DPOD Appliance | NTP Server | 123 (UDP) | NTP | Time synchronization
Cell Manager DPOD Appliance | Organizational mail server | 25 (TCP) | SMTP | Send reports by email
Cell Manager DPOD Appliance | LDAP | TCP 389 / 636 (SSL), TCP 3268 / 3269 (SSL) | LDAP | Authentication & authorization. Can be over SSL
Cell Manager DPOD Appliance | Each Cell Member DPOD Appliance | 9300-9305 (TCP) | Elasticsearch | Elasticsearch communication (data + management)
NTP Server | Cell Manager DPOD Appliance | 123 (UDP) | NTP | Time synchronization
Each Monitored Device | Cell Manager DPOD Appliance | 60000-60003 (TCP) | TCP | SYSLOG data
Each Monitored Device | Cell Manager DPOD Appliance | 60020-60023 (TCP) | HTTP/S | WS-M payloads
Users IPs | Cell Manager DPOD Appliance | 443 (TCP) | HTTP/S | Access to the IBM DataPower Operations Dashboard Console
Admins IPs | Cell Manager DPOD Appliance | 22 (TCP) | TCP | SSH
Cell Member DPOD Appliance | Cell Manager DPOD Appliance | 9200, 9300-9400 (TCP) | Elasticsearch | Elasticsearch communication (data + management)
Cell Member DPOD Appliance | DNS Server | 53 (TCP and UDP) | DNS | DNS services
Cell Member DPOD Appliance | NTP Server | 123 (UDP) | NTP | Time synchronization
NTP Server | Cell Member DPOD Appliance | 123 (UDP) | NTP | Time synchronization
Each Monitored Device | Cell Member DPOD Appliance | 60000-60003 (TCP) | TCP | SYSLOG data
Each Monitored Device | Cell Member DPOD Appliance | 60020-60023 (TCP) | HTTP/S | WS-M payloads
Admins IPs | Cell Member DPOD Appliance | 22 (TCP) | TCP | SSH
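
A quick way to verify that a required port is reachable from a given source host is a simple TCP connection test. The host name and port below are placeholders only - substitute your own values (nc may need to be installed first):

nc -vz dpod-cell-manager.example.com 9300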


Manager Installation

The DPOD cell manager can be either an "Appliance Mode" or a "Non Appliance Mode" installation with the "Medium" architecture type, as detailed in the Hardware and Software Requirements. The manager server can be either virtual or physical.


As described in the prerequisites section, the cell topology requires two network interfaces. When installing the cell manager (a standard DPOD installation, before federating it to the cell), the user will be prompted to choose the IP address for the UI console; this should be the "External Interface".



Federated Cluster Member Installation

The following section describes the installation process of a single Federated Cluster Member (FCM). Repeat the procedure for every FCM installation.

Prerequisites

  • The DPOD cell member (FCM) should be a "Non Appliance Mode" installation with the "High_20dv with High Load" architecture type, as detailed in the Hardware and Software Requirements.
  • In addition to the "Non Appliance Mode" software requirements, install the following software packages (RPM):
    • iptables
    • iptables-services
    • numactl

Installation

DPOD installation

Install DPOD "Non Appliance Mode"  as described in the following installation procedure.


As described in the prerequisites section, the cell topology requires two network interfaces. When installing the cell member (a standard DPOD installation, before federating it to the cell), the user will be prompted to choose the IP address for the UI console.
Although the cell member does not have a UI service, the user should choose the "External Interface".


After the DPOD installation is complete, execute the following operating system performance optimization script:

/app/scripts/tune-os-parameters.sh


Reboot the server for the new performance optimizations to take effect.

Prepare Cell Member for Federation

The cell member is usually a "bare metal" server with NVMe disks, in order to maximize server throughput.

Each of the Store's logical nodes (services) will be bound to specific physical processors, disks and memory (using NUMA - Non-Uniform Memory Access).

The default cell member configuration assumes 6 NVMe disks, which will serve 3 Store data nodes (2 disks per node).
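
To help fill in the Disk Serial, Disk Path and CPU No columns of the tables below, the NUMA topology and the NVMe disks can be inspected with commands such as the following (a sketch only; numactl is one of the prerequisite packages, and the exact sysfs path may vary by hardware):

numactl --hardware                            # list NUMA nodes with their CPUs and memory
lsblk -d -o NAME,SERIAL,SIZE,MODEL            # map NVMe device paths to disk serial numbers
cat /sys/class/nvme/nvme0/device/numa_node    # NUMA node that owns the nvme0 controller (repeat per controller)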

The following OS mount points should be configured by the user before federating the DPOD installation as a cell member.

We highly recommend using LVM (Logical Volume Manager) to allow flexible storage for future storage needs.


Note - the Disk Serial, Disk Path and CPU No cells should be completed by the user based on the specific hardware.


Store Node | Mount Point Path | Disk Serial | Disk Path | CPU No
2 | /data2 |  |  |
2 | /data22 |  |  |
3 | /data3 |  |  |
3 | /data33 |  |  |
4 | /data4 |  |  |
4 | /data44 |  |  |

Examples for Mount Points and Disk Configurations



Store Node | Mount Point Path | Disk Serial | Disk Path | CPU No
2 | /data2 |  | /dev/nvme0n1 | 1
2 | /data22 |  | /dev/nvme1n1 | 1
3 | /data3 |  | /dev/nvme2n1 | 2
3 | /data33 |  | /dev/nvme3n1 | 2
4 | /data4 |  | /dev/nvme4n1 | 3
4 | /data44 |  | /dev/nvme5n1 | 3
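
As an illustration only, the first mount point in the example above could be created with LVM roughly as follows. The volume group and logical volume names and the XFS file system type are assumptions for this sketch - follow your organization's storage standards and the DPOD storage requirements:

pvcreate /dev/nvme0n1                                       # initialize the NVMe disk as an LVM physical volume
vgcreate vg_data2 /dev/nvme0n1                              # create a volume group on the disk
lvcreate -n lv_data2 -l 100%FREE vg_data2                   # allocate all of its space to one logical volume
mkfs.xfs /dev/vg_data2/lv_data2                             # create the file system
mkdir -p /data2
echo "/dev/vg_data2/lv_data2  /data2  xfs  defaults  0 0" >> /etc/fstab
mount /data2

Repeat for the remaining disks and mount points (/data22, /data3, /data33, /data4, /data44).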





Cell Member Federation

To federate and configure the cell member, run the following script on the cell manager, once per cell member:

/app/scripts/configure_federated_cluster_manager.sh -a <internal IP address of the cell member> -g <external IP address of the cell member>
For example: /app/scripts/configure_federated_cluster_manager.sh -a 172.18.100.34 -g 172.17.100.33








To configure the Local Node and Remote Collector(s), run the following script in the Local Node, once per Remote Collector:

configure_local_node.sh -a <IP address of the remote collector>
For example: configure_local_node.sh -a 192.168.0.5

The script will configure both the Local Node and the Remote Collector.
Run this script once for each Remote Collector you want to add - e.g., to add two Remote Collectors, run the script twice (in the Local Node): the first time with the IP address of the first Remote Collector, and the second time with the IP address of the second Remote Collector, as shown below.
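
For example, to add two Remote Collectors (the IP addresses below are placeholders):

configure_local_node.sh -a 192.168.0.5
configure_local_node.sh -a 192.168.0.6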

Optional parameters:

configure_local_node.sh -a <IP address of the remote collector> -s <initial syslog agents port> -w <initial WSM agents port>
For example: configure_local_node.sh -a 192.168.0.5 -s 70000 -w 70050

The defaults are 60000 for the initial syslog agents port and 60020 for the initial WS-M agents port.

Output

Example of a successful execution: note that the script writes two log files, one in the Local Node and one in the Remote Collector; the log file names are mentioned in the script's output.

In case of a failed execution, check the log files for further information.
On failure, the script will try to roll back the configuration changes it made, so you can fix the problem and run it again.


After a successful execution, the new Remote Collectors will appear in the Manage → System → Nodes page (for example, two new entries if two Remote Collectors were added).

The new agents will also be shown in the Manage → Internal Health → Agents page.
For example, with one Local Node with two agents and two Remote Collectors with two agents each, the page will show six agents.

Configure the Monitored Devices to Use the Remote Collector's Agents

It is possible to configure an entire monitored device, or just a specific domain, to use a Remote Collector's agents.

To configure a monitored device or a specific domain, follow the instructions in Adding Monitored Devices.


Manual Setup Steps

We recommend using the script described in the previous section.
There is no need to take any manual steps if you have already run the script.

  1. The following communication and ports are used in a remote collector deployment scenario (see Table 1 below). Run the following commands on each DPOD local firewall to open the required ports:

    Run in the Local Node -
    Replace XXXX with the IP address of the Remote Collector:

    iptables -I INPUT -p tcp -s XXXX/24 --dport 9300:9309 -j ACCEPT
    service iptables save
    service iptables restart
    

    After running the commands, run the following command and search the output for the entries showing ports 9300:9309:

    iptables -L -n
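
    The output should contain an ACCEPT entry similar to the following (illustrative only - the actual chains, rule order and additional rules depend on the existing DPOD firewall configuration):

    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     tcp  --  192.168.0.0/24       0.0.0.0/0            tcp dpts:9300:9309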




    Table 1

    From | To | Ports (Defaults) | Protocol | Usage
    Local Node DPOD Appliance | Each Monitored Device | 5550 (TCP) | HTTP/S | Monitored Device administration management interface
    Local Node DPOD Appliance | DNS Server | 53 (TCP and UDP) | DNS | DNS services
    Local Node DPOD Appliance | NTP Server | 123 (UDP) | NTP | Time synchronization
    Local Node DPOD Appliance | Organizational mail server | 25 (TCP) | SMTP | Send reports by email
    Local Node DPOD Appliance | LDAP | TCP 389 / 636 (SSL), TCP 3268 / 3269 (SSL) | LDAP | Authentication & authorization. Can be over SSL
    NTP Server | Local Node DPOD Appliance | 123 (UDP) | NTP | Time synchronization
    Each Monitored Device | Local Node DPOD Appliance | 60000-60009 (TCP) | TCP | SYSLOG data
    Each Monitored Device | Local Node DPOD Appliance | 60020-60029 (TCP) | HTTP/S | WS-M payloads
    Users IPs | Local Node DPOD Appliance | 443 (TCP) | HTTP/S | Access to the IBM DataPower Operations Dashboard Console
    Admins IPs | Local Node DPOD Appliance | 22 (TCP) | TCP | SSH
    Remote Collector DPOD Appliance | Each Monitored Device | 5550 (TCP) | HTTP/S | Monitored Device administration management interface
    Remote Collector DPOD Appliance | DNS Server | 53 (TCP and UDP) | DNS | DNS services
    Remote Collector DPOD Appliance | NTP Server | 123 (UDP) | NTP | Time synchronization
    Remote Collector DPOD Appliance | Organizational mail server | 25 (TCP) | SMTP | Send reports by email
    Remote Collector DPOD Appliance | LDAP | TCP 389 / 636 (SSL), TCP 3268 / 3269 (SSL) | LDAP | Authentication & authorization. Can be over SSL
    NTP Server | Remote Collector DPOD Appliance | 123 (UDP) | NTP | Time synchronization
    Each Monitored Device | Remote Collector DPOD Appliance | 60000-60009 (TCP) | TCP | SYSLOG data
    Each Monitored Device | Remote Collector DPOD Appliance | 60020-60029 (TCP) | HTTP/S | WS-M payloads
    Users IPs | Remote Collector DPOD Appliance | 443 (TCP) | HTTP/S | Access to the IBM DataPower Operations Dashboard Console
    Admins IPs | Remote Collector DPOD Appliance | 22 (TCP) | TCP | SSH



  2. From the Local Node's UI, go to the Manage menu, select "Nodes" under "System" and click "Edit"



    Enter the IP address of the Remote Collector device and click "Update". You can leave the "Agents DNS Address" field empty.



  3. In the Local Node
    Connect to the Local Node DPOD via SSH as the root user (using PuTTY or any other SSH client).
    Using the Command Line Interface, choose option 2 - "Stop All", and wait until all the services are stopped. This may take a few minutes to complete.


  4. In the Local Node
    Using PuTTY or any other SSH client, issue the following command:

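    # Remove the Syslog agents from SERVICES_SIXTH_GROUP in /etc/sysconfig/MonTier, leaving only the housekeeping services in that group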
    sed -i -e "s/^SERVICES_SIXTH_GROUP=\".*MonTier-SyslogAgent-1 MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/SERVICES_SIXTH_GROUP=\"MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/g" /etc/sysconfig/MonTier
  5. In the Local Node
    Using PuTTY or any other SSH client, issue the following commands:

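    # Disable the local Syslog and WS-M agent services on the Local Node by renaming their init scripts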
    mv /etc/init.d/MonTier-SyslogAgent-1 /etc/init.d/Disabled-MonTier-SyslogAgent-1
    mv /etc/init.d/MonTier-SyslogAgent-2 /etc/init.d/Disabled-MonTier-SyslogAgent-2
    mv /etc/init.d/MonTier-SyslogAgent-3 /etc/init.d/Disabled-MonTier-SyslogAgent-3
    mv /etc/init.d/MonTier-SyslogAgent-4 /etc/init.d/Disabled-MonTier-SyslogAgent-4
    mv /etc/init.d/MonTier-SyslogAgent-5 /etc/init.d/Disabled-MonTier-SyslogAgent-5
    mv /etc/init.d/MonTier-SyslogAgent-6 /etc/init.d/Disabled-MonTier-SyslogAgent-6
    mv /etc/init.d/MonTier-SyslogAgent-7 /etc/init.d/Disabled-MonTier-SyslogAgent-7
    mv /etc/init.d/MonTier-SyslogAgent-8 /etc/init.d/Disabled-MonTier-SyslogAgent-8
    mv /etc/init.d/MonTier-SyslogAgent-9 /etc/init.d/Disabled-MonTier-SyslogAgent-9
    mv /etc/init.d/MonTier-SyslogAgent-10 /etc/init.d/Disabled-MonTier-SyslogAgent-10
    
    
    mv /etc/init.d/MonTier-WsmAgent-1 /etc/init.d/Disabled-MonTier-WsmAgent-1
    mv /etc/init.d/MonTier-WsmAgent-2 /etc/init.d/Disabled-MonTier-WsmAgent-2
    mv /etc/init.d/MonTier-WsmAgent-3 /etc/init.d/Disabled-MonTier-WsmAgent-3
    mv /etc/init.d/MonTier-WsmAgent-4 /etc/init.d/Disabled-MonTier-WsmAgent-4
    mv /etc/init.d/MonTier-WsmAgent-5 /etc/init.d/Disabled-MonTier-WsmAgent-5
    

    Note: some errors might appear for services that do not exist in your specific deployment architecture type - for example: "mv: cannot stat ‘/etc/init.d/MonTier-SyslogAgent-10’: No such file or directory".


  6. In the Local Node
    Using any text editor (such as vi), edit the /etc/hosts file (e.g. vi /etc/hosts) and change the following entries:
    montier-es - from 127.0.0.1 to the IP address of the Local Node device
    montier-syslog and montier-wsm - to the IP address of the Remote Collector device

    Save the changes when exiting the editor (e.g. :wq).
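
    For example, assuming the Local Node's IP address is 192.168.0.4 and the Remote Collector's is 192.168.0.5 (placeholder addresses), the relevant /etc/hosts entries would look like this:

    192.168.0.4   montier-es
    192.168.0.5   montier-syslog
    192.168.0.5   montier-wsm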

  7. In the Local Node
    Using the Command Line Interface, select option 1 - "Start All". This may take a few minutes to complete.



  8. Connect to the Remote Collector DPOD via SSH as the root user (using PuTTY or any other SSH client).
    Using the Command Line Interface, choose option 2 - "Stop All", and wait until all the services are stopped. This may take a few minutes to complete.


  9. In the Remote Collector
    Using PuTTY or any other SSH client, issue the following commands:

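    # Disable the Store (Elasticsearch) nodes, Derby, housekeeping, reports and UI services by renaming their init scripts - only the Syslog and WS-M agents remain enabled on the Remote Collector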
    mv /etc/init.d/MonTier-es-raw-trans-Node-1 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-1
    mv /etc/init.d/MonTier-es-raw-trans-Node-2 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-2
    mv /etc/init.d/MonTier-es-raw-trans-Node-3 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-3
    mv /etc/init.d/MonTier-es-raw-trans-Node-4 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-4
    
    mv /etc/init.d/MonTier-Derby /etc/init.d/Disabled-MonTier-Derby
    
    mv /etc/init.d/MonTier-HK-ESRetention /etc/init.d/Disabled-MonTier-HK-ESRetention
    
    mv /etc/init.d/MonTier-HK-SyslogKeepalive /etc/init.d/Disabled-MonTier-HK-SyslogKeepalive
    mv /etc/init.d/MonTier-HK-WsmKeepalive /etc/init.d/Disabled-MonTier-HK-WsmKeepalive
    
    mv /etc/init.d/MonTier-HK-WdpDeviceResources /etc/init.d/Disabled-MonTier-HK-WdpDeviceResources
    mv /etc/init.d/MonTier-HK-WdpServiceResources /etc/init.d/Disabled-MonTier-HK-WdpServiceResources
    
    mv /etc/init.d/MonTier-Reports /etc/init.d/Disabled-MonTier-Reports
    
    mv /etc/init.d/MonTier-UI /etc/init.d/Disabled-MonTier-UI
    
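    # Remove the disabled services from the service groups in /etc/sysconfig/MonTier so that only the Syslog and WS-M agent services are started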
    sed -i -e "s/^SERVICES_FIRST_GROUP=\".*/SERVICES_FIRST_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_SECOND_GROUP=\".*/SERVICES_SECOND_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_THIRD_GROUP=\".*/SERVICES_THIRD_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/\MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources//g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_SEVENTH_GROUP=\".*/SERVICES_SEVENTH_GROUP=\"\"/g" /etc/sysconfig/MonTier

    Note: some errors might appear for services that do not exist in your specific deployment architecture type - for example: "mv: cannot stat ‘/etc/init.d/MonTier-es-raw-trans-Node-4’: No such file or directory".

  10. In the Remote Collector
    Using any text editor (such as vi), edit the /etc/hosts file (e.g. vi /etc/hosts) and change the following entry:
    montier-es - from 127.0.0.1 to the IP address of the Local Node device


  11. In the Remote Collector
    Using the Command Line Interface, choose option 1 - "Start All", and wait until all the services are started. This may take a few minutes to complete.


  12. Verify in the console, under Manage → Internal Health → Agents, that all agents are in a green state.

  13. Run the following two scripts (you will need to obtain them from IBM Support):
    in the Local Node - configure_local_node.sh
    in the Remote Collector - configure_remote_collector.sh
  14. In the Local Node - only if DPOD was already attached to DataPower Gateways:
    you will need to reconfigure all of the attached devices.

After the setup is complete, DPOD's Web Console will no longer be available for the Remote Collector; the only way to connect to the Remote Collector will be via an SSH client.

