IBM DataPower Operations Dashboard v1.0.11.0


Set Up a Remote Collector

Overview

The remote collector deployment assists in two scenarios:

  • Data must be collected across several deployments, but a single consolidated view is required (only one Local Node is needed).
  • A Local Node is reaching its CPU limit and work must be offloaded (up to 20% of CPU can be offloaded under high load).

To set up a new Remote Collector server, install another new DPOD server based on the prerequisites below. The node that contains the data and the console is called the "Local Node", and the second installation (which contains only the Syslog and WS-M agents) is called the "Remote Collector".

Prerequisites

  1. The two DPOD installations must be the same version (minimum version is v1.0.7).
  2. The remote collector DPOD installation should be configured with the "medium" architecture type, as detailed in the Hardware and Software Requirements.
  3. Each installation requires different ports to be opened in the firewall - see table 1.
  4. There are no requirements regarding the Environment name of each DPOD installation.
  5. The two DPOD installations must be able to communicate with each other and with the monitored DataPower devices.

Setup steps

To configure the local node and remote collector(s), run the following script on the local node once per remote collector.

configure_local_node.sh -a <IP address of the remote collector>
For example: configure_local_node.sh -a 192.168.0.5

The script will configure both the local node and remote collector.
Run this script once for each remote collector that you want to add - e.g. if you want to add two remote collectors, run the script twice (in the local node), first time with the IP address of the first remote collector, and second time with the IP of the second remote collector.
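
For example, adding two remote collectors could be scripted as a simple loop. This is only a sketch that prints the invocations; the IP addresses are placeholders for your own collectors' addresses:

```shell
# Print the configure_local_node.sh invocation for each remote collector.
# The IPs below are placeholders -- substitute your collectors' addresses.
for ip in 192.168.0.5 192.168.0.6; do
  echo "configure_local_node.sh -a $ip"
done
```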

Optional parameters:

configure_local_node.sh -a <IP address of the remote collector> -s <initial syslog agents port> -w <initial WSM agents port>
For example: configure_local_node.sh -a 192.168.0.5 -s 61000 -w 61050

The defaults are port 60000 for the initial syslog agents port and port 60020 for the initial WS-M agents port.

Output

Example of a successful execution - note that the script writes two log files, one on the local node and one on the remote collector; the log file names are mentioned in the script's output.

Example of a failed execution - check the log file for further information.
In case of a failure, the script tries to roll back the configuration changes it made, so you can fix the problem and run it again.


After a successful execution, the new remote collectors appear in the Manage → System → Nodes page.
For example, if two remote collectors were added:

The new agents are also shown in the Manage → Internal Health → Agents page.
For example, with one local node running two agents and two remote collectors running two agents each, the page shows six agents:

Configure the Monitored Device to Use the Remote Collector's Agents

It is possible to configure an entire monitored device, or just a specific domain, to use a remote collector's agents.

To configure a monitored device or a specific domain, follow the instructions in Adding Monitored Devices.


Manual Setup Steps

We recommend using the script described in the previous section.
No manual steps are needed if you have already run the script.

  1. The following communication and ports are used in a remote collector deployment scenario (table 1). Run the following commands on each DPOD local firewall:

    Run on the Local Node -
    Replace XXXX with the IP address of the Remote Collector

    iptables -I INPUT -p tcp -s XXXX/24 --dport 9300:9309 -j ACCEPT
    service iptables save
    service iptables restart
    

    After running the commands, run the following command and search the output for two entries showing port 9300:

    iptables -L -n
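
If you prefer a non-interactive check, you can filter the listing. This is a sketch, assuming the typical `dpts:` notation that `iptables -L -n` uses for destination port ranges; it runs here against a sample line rather than a live firewall:

```shell
# Count rules that open the Store ports 9300:9309. The sample line mimics
# typical `iptables -L -n` output; on the appliance, pipe the real listing
# through grep instead of using the sample.
listing='ACCEPT     tcp  --  192.168.0.0/24       0.0.0.0/0            tcp dpts:9300:9309'
matches=$(echo "$listing" | grep -c 'dpts:9300:9309')
echo "$matches"
```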




    table 1

    | From                             | To                               | Ports (Defaults)                          | Protocol | Usage                                                    |
    |----------------------------------|----------------------------------|-------------------------------------------|----------|----------------------------------------------------------|
    | Local Node DPOD Appliance        | Each Monitored Device            | 5550 (TCP)                                 | HTTP/S   | Monitored Device administration management interface     |
    | Local Node DPOD Appliance        | DNS Server                       | 53 (TCP and UDP)                           | DNS      | DNS services                                             |
    | Local Node DPOD Appliance        | NTP Server                       | 123 (UDP)                                  | NTP      | Time synchronization                                     |
    | Local Node DPOD Appliance        | Organizational mail server       | 25 (TCP)                                   | SMTP     | Send reports by email                                    |
    | Local Node DPOD Appliance        | LDAP                             | 389 / 636 (SSL), 3268 / 3269 (SSL) (TCP)   | LDAP     | Authentication & authorization. Can be over SSL          |
    | NTP Server                       | Local Node DPOD Appliance        | 123 (UDP)                                  | NTP      | Time synchronization                                     |
    | Each Monitored Device            | Local Node DPOD Appliance        | 60000-60009 (TCP)                          | TCP      | Syslog data                                              |
    | Each Monitored Device            | Local Node DPOD Appliance        | 60020-60029 (TCP)                          | HTTP/S   | WS-M payloads                                            |
    | Users IPs                        | Local Node DPOD Appliance        | 443 (TCP)                                  | HTTP/S   | Access to the IBM DataPower Operations Dashboard Console |
    | Admins IPs                       | Local Node DPOD Appliance        | 22 (TCP)                                   | TCP      | SSH                                                      |
    | Remote Collector DPOD Appliance  | Local Node DPOD Appliance        | 9300-9309 (TCP)                            | TCP      | DPOD's Store communication                               |
    | Remote Collector DPOD Appliance  | Each Monitored Device            | 5550 (TCP)                                 | HTTP/S   | Monitored Device administration management interface     |
    | Remote Collector DPOD Appliance  | DNS Server                       | 53 (TCP and UDP)                           | DNS      | DNS services                                             |
    | Remote Collector DPOD Appliance  | NTP Server                       | 123 (UDP)                                  | NTP      | Time synchronization                                     |
    | Remote Collector DPOD Appliance  | Organizational mail server       | 25 (TCP)                                   | SMTP     | Send reports by email                                    |
    | Remote Collector DPOD Appliance  | LDAP                             | 389 / 636 (SSL), 3268 / 3269 (SSL) (TCP)   | LDAP     | Authentication & authorization. Can be over SSL          |
    | NTP Server                       | Remote Collector DPOD Appliance  | 123 (UDP)                                  | NTP      | Time synchronization                                     |
    | Each Monitored Device            | Remote Collector DPOD Appliance  | 60000-60009 (TCP)                          | TCP      | Syslog data                                              |
    | Each Monitored Device            | Remote Collector DPOD Appliance  | 60020-60029 (TCP)                          | HTTP/S   | WS-M payloads                                            |
    | Users IPs                        | Remote Collector DPOD Appliance  | 443 (TCP)                                  | HTTP/S   | Access to the IBM DataPower Operations Dashboard Console |
    | Admins IPs                       | Remote Collector DPOD Appliance  | 22 (TCP)                                   | TCP      | SSH                                                      |



  2. From the Local Node's UI, open the Manage menu, select "Nodes" under "System", and click "Edit".



    Enter the IP address of the Remote Collector device and click "Update". You can leave the "Agents DNS Address" field empty.



  3. In the Local Node
    Connect to the Local Node DPOD via SSH as the root user (using PuTTY or any other SSH client).
    Using the Command Line Interface, choose option 2 - "Stop All" - and wait until all the services are stopped; this may take a few minutes to complete.


  4. In the Local Node
    Using PuTTY or any other SSH client, issue the following command:

    sed -i -e "s/^SERVICES_SIXTH_GROUP=\".*MonTier-SyslogAgent-1 MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/SERVICES_SIXTH_GROUP=\"MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/g" /etc/sysconfig/MonTier
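
The effect of this substitution can be seen by applying it to a sample line. The sample contents are illustrative only; the actual SERVICES_SIXTH_GROUP value in /etc/sysconfig/MonTier may differ by architecture type:

```shell
# Demonstration of the substitution above on a sample line: it drops
# MonTier-SyslogAgent-1 (and anything before it) from the sixth service
# group, keeping the two housekeeping services.
sample='SERVICES_SIXTH_GROUP="MonTier-SyslogAgent-1 MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources"'
result=$(echo "$sample" | sed -e "s/^SERVICES_SIXTH_GROUP=\".*MonTier-SyslogAgent-1 MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/SERVICES_SIXTH_GROUP=\"MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources/g")
echo "$result"
```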
  5. In the Local Node
    Using PuTTY or any other SSH client, issue the following commands:

    mv /etc/init.d/MonTier-SyslogAgent-1 /etc/init.d/Disabled-MonTier-SyslogAgent-1
    mv /etc/init.d/MonTier-SyslogAgent-2 /etc/init.d/Disabled-MonTier-SyslogAgent-2
    mv /etc/init.d/MonTier-SyslogAgent-3 /etc/init.d/Disabled-MonTier-SyslogAgent-3
    mv /etc/init.d/MonTier-SyslogAgent-4 /etc/init.d/Disabled-MonTier-SyslogAgent-4
    mv /etc/init.d/MonTier-SyslogAgent-5 /etc/init.d/Disabled-MonTier-SyslogAgent-5
    mv /etc/init.d/MonTier-SyslogAgent-6 /etc/init.d/Disabled-MonTier-SyslogAgent-6
    mv /etc/init.d/MonTier-SyslogAgent-7 /etc/init.d/Disabled-MonTier-SyslogAgent-7
    mv /etc/init.d/MonTier-SyslogAgent-8 /etc/init.d/Disabled-MonTier-SyslogAgent-8
    mv /etc/init.d/MonTier-SyslogAgent-9 /etc/init.d/Disabled-MonTier-SyslogAgent-9
    mv /etc/init.d/MonTier-SyslogAgent-10 /etc/init.d/Disabled-MonTier-SyslogAgent-10
    
    
    mv /etc/init.d/MonTier-WsmAgent-1 /etc/init.d/Disabled-MonTier-WsmAgent-1
    mv /etc/init.d/MonTier-WsmAgent-2 /etc/init.d/Disabled-MonTier-WsmAgent-2
    mv /etc/init.d/MonTier-WsmAgent-3 /etc/init.d/Disabled-MonTier-WsmAgent-3
    mv /etc/init.d/MonTier-WsmAgent-4 /etc/init.d/Disabled-MonTier-WsmAgent-4
    mv /etc/init.d/MonTier-WsmAgent-5 /etc/init.d/Disabled-MonTier-WsmAgent-5
    

    Note: some errors might appear for services that do not exist in your specific deployment architecture type - for example "mv: cannot stat '/etc/init.d/MonTier-SyslogAgent-10': No such file or directory".
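
The ten SyslogAgent and five WsmAgent renames above can also be written as a loop. This is a sketch; INIT_DIR is a variable introduced here for illustration (on the appliance the scripts live in /etc/init.d), and the existence test skips agents that are not present in your architecture type, avoiding the "cannot stat" errors mentioned in the note:

```shell
# Rename (disable) each agent init script that exists.
INIT_DIR=${INIT_DIR:-/etc/init.d}
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ -e "$INIT_DIR/MonTier-SyslogAgent-$i" ]; then
    mv "$INIT_DIR/MonTier-SyslogAgent-$i" "$INIT_DIR/Disabled-MonTier-SyslogAgent-$i"
  fi
done
for i in 1 2 3 4 5; do
  if [ -e "$INIT_DIR/MonTier-WsmAgent-$i" ]; then
    mv "$INIT_DIR/MonTier-WsmAgent-$i" "$INIT_DIR/Disabled-MonTier-WsmAgent-$i"
  fi
done
echo "agent services disabled"
```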


  6. In the Local Node
    Using any text editor (such as vi), edit the /etc/hosts file (e.g. vi /etc/hosts).
    Change the following entries:
    montier-es - from 127.0.0.1 to the IP of the Local Node device
    montier-syslog and montier-wsm - to the IP of the remote collector device

    Save the changes when exiting (e.g. :wq).
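
After the edit, the relevant /etc/hosts entries on the Local Node should look like the following (the IP addresses are placeholders for your own):

```
192.168.0.4   montier-es        # IP of the Local Node itself
192.168.0.5   montier-syslog    # IP of the remote collector
192.168.0.5   montier-wsm       # IP of the remote collector
```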

  7. In the Local Node
    Using the Command Line Interface, select option 1 - "Start All"; this may take a few minutes to complete.



  8. Connect to the Remote Collector DPOD via SSH as the root user (using PuTTY or any other SSH client).
    Using the Command Line Interface, choose option 2 - "Stop All" - and wait until all the services are stopped; this may take a few minutes to complete.


  9. In the Remote Collector
    Using PuTTY or any other SSH client, issue the following commands:

    mv /etc/init.d/MonTier-es-raw-trans-Node-1 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-1
    mv /etc/init.d/MonTier-es-raw-trans-Node-2 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-2
    mv /etc/init.d/MonTier-es-raw-trans-Node-3 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-3
    mv /etc/init.d/MonTier-es-raw-trans-Node-4 /etc/init.d/Disabled-MonTier-es-raw-trans-Node-4
    
    mv /etc/init.d/MonTier-Derby /etc/init.d/Disabled-MonTier-Derby
    
    mv /etc/init.d/MonTier-HK-ESRetention /etc/init.d/Disabled-MonTier-HK-ESRetention
    
    mv /etc/init.d/MonTier-HK-SyslogKeepalive /etc/init.d/Disabled-MonTier-HK-SyslogKeepalive
    mv /etc/init.d/MonTier-HK-WsmKeepalive /etc/init.d/Disabled-MonTier-HK-WsmKeepalive
    
    mv /etc/init.d/MonTier-HK-WdpDeviceResources /etc/init.d/Disabled-MonTier-HK-WdpDeviceResources
    mv /etc/init.d/MonTier-HK-WdpServiceResources /etc/init.d/Disabled-MonTier-HK-WdpServiceResources
    
    mv /etc/init.d/MonTier-Reports /etc/init.d/Disabled-MonTier-Reports
    
    mv /etc/init.d/MonTier-UI /etc/init.d/Disabled-MonTier-UI
    
    sed -i -e "s/^SERVICES_FIRST_GROUP=\".*/SERVICES_FIRST_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_SECOND_GROUP=\".*/SERVICES_SECOND_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_THIRD_GROUP=\".*/SERVICES_THIRD_GROUP=\"\"/g" /etc/sysconfig/MonTier
    sed -i -e "s/\MonTier-HK-WdpServiceResources MonTier-HK-WdpDeviceResources//g" /etc/sysconfig/MonTier
    sed -i -e "s/^SERVICES_SEVENTH_GROUP=\".*/SERVICES_SEVENTH_GROUP=\"\"/g" /etc/sysconfig/MonTier

    Note: some errors might appear for services that do not exist in your specific deployment architecture type - for example "mv: cannot stat '/etc/init.d/MonTier-es-raw-trans-Node-4': No such file or directory".
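
The effect of the group-emptying substitutions can be seen by applying one of them to a sample line. The sample contents are illustrative only; the actual group values in /etc/sysconfig/MonTier may differ by architecture type:

```shell
# Demonstration: the sed above replaces the whole SERVICES_FIRST_GROUP
# value with an empty string, so no services in that group are started.
sample='SERVICES_FIRST_GROUP="MonTier-es-raw-trans-Node-1 MonTier-es-raw-trans-Node-2"'
result=$(echo "$sample" | sed -e "s/^SERVICES_FIRST_GROUP=\".*/SERVICES_FIRST_GROUP=\"\"/g")
echo "$result"
```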

  10. In the Remote Collector
    Using any text editor (such as vi), edit the /etc/hosts file (e.g. vi /etc/hosts).
    Change the following entry:
    montier-es - from 127.0.0.1 to the IP of the Local Node device
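
After the edit, the relevant /etc/hosts entry on the Remote Collector should look like this (the IP address is a placeholder for your Local Node's):

```
192.168.0.4   montier-es        # IP of the Local Node device
```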


  11. In the Remote Collector
    Using the Command Line Interface, choose option 1 - "Start All", and wait until all the services are started; this may take a few minutes to complete.


  12. Verify in the console, in Manage → Internal Health → Agents, that all agents are in a green state.

  13. Run the following two scripts; you will need to obtain them from IBM Support:
    in the Local Node - configure_local_node.sh
    in the Remote Collector - configure_remote_collector.sh
  14. In the Local Node - !! Only if DPOD was already attached to DataPower Gateways !!
    You will need to reconfigure all the attached devices.

After the setup is complete, DPOD's web console will no longer be available on the Remote Collector; the only way to connect to the Remote Collector will be via an SSH client.

