The Operator Lifecycle Manager (OLM) provides a framework for installing, updating, and managing the lifecycle of operators and their services.


Prerequisites

The DataPower Operations Dashboard Cloud Agent Operator currently supports installation via OLM; see Prerequisites for the supported versions.

Installation Mode

When installing an operator via OLM, there are two options for the Installation Mode:

  • All namespaces on the cluster: AllNamespaces (aka cluster scope)

  • A specific namespace on the cluster: OwnNamespace (aka namespace scope)

In AllNamespaces mode, the Operator uses a ClusterRole and ClusterRoleBinding, giving it cluster-wide scope to manage DataPower Operations Dashboard Cloud Agent resources across all namespaces. In OwnNamespace mode, the Operator uses a Role and RoleBinding as its primary access (limited to the namespace it is installed in), plus a limited set of ClusterRole permissions (see Cluster-scope permissions).

Note

Do not install the Operator in more than one mode. If AllNamespaces is chosen, do not subsequently install a second instance in OwnNamespace mode.
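
The installation mode is realized through the OperatorGroup that OLM associates with the operator's Subscription. As a sketch, an OperatorGroup for OwnNamespace mode might look like the following (the resource and namespace names are examples; for AllNamespaces mode, omit spec.targetNamespaces):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: dpod-cloud-agent-og        # example name
  namespace: dpod-cloud-agent      # the namespace the Operator is installed in
spec:
  targetNamespaces:
    - dpod-cloud-agent             # OwnNamespace: the Operator watches only this namespace
```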

Available Versions

DPOD Version    Cloud Agent Operator Version
1.0.19.0        0.1.0

Loading Images to a Container Registry

DPOD Cloud Agent images are currently available for download from Passport Advantage (PPA) and must be loaded into a container registry so that they can be pulled by your Kubernetes cluster. The container registry may be any external registry that is accessible to the cluster, or the cluster's internal registry.

This is the list of the image file names (as available on PPA) and the corresponding image names and tags:

Image File Name                                Image Name and Tag
dpod-ca-operator-catalog-<DPOD-VERSION>.tgz    dpod-cloud-agent-operator-catalog:<OPERATOR-VERSION>-amd64
dpod-ca-operator-bundle-<DPOD-VERSION>.tgz     dpod-cloud-agent-operator-bundle:<OPERATOR-VERSION>-amd64
dpod-ca-operator-<DPOD-VERSION>.tgz            dpod-cloud-agent-operator:<OPERATOR-VERSION>-amd64
dpod-ca-api-proxy-<DPOD-VERSION>.tgz           dpod-cloud-agent-api-proxy:<DPOD-VERSION>-amd64
dpod-ca-http-ingester-<DPOD-VERSION>.tgz       dpod-cloud-agent-http-ingester:<DPOD-VERSION>-amd64
dpod-ca-manager-<DPOD-VERSION>.tgz             dpod-cloud-agent-manager:<DPOD-VERSION>-amd64
dpod-ca-messaging-broker-<DPOD-VERSION>.tgz    dpod-cloud-agent-messaging-broker:<DPOD-VERSION>-amd64
dpod-ca-syslog-ingester-<DPOD-VERSION>.tgz     dpod-cloud-agent-syslog-ingester:<DPOD-VERSION>-amd64

Installing

...

The skopeo syntax is as follows:

Code Block
skopeo copy --all --dest-creds=<destination container registry credentials if needed> docker-archive:<image file full path> \
    docker://<destination container registry path>/<image name>:<image tag>

...

  1. Set variables with the source, destination, versions, etc.:

    Code Block
    DPOD_CLOUD_AGENT_NAMESPACE="dpod-cloud-agent"
    CONTAINER_REGISTRY_EXTERNAL_URL="default-route-openshift-image-registry.apps.ocp4.mycluster.com"
CONTAINER_REGISTRY_INTERNAL_URL="image-registry.openshift-image-registry.svc:5000"
    DPOD_CLOUD_AGENT_VERSION="1.0.19.0"
    DPOD_CLOUD_AGENT_OPERATOR_VERSION="0.1.0"
DPOD_CLOUD_AGENT_IMAGE_TAG="${DPOD_CLOUD_AGENT_VERSION}-amd64"
    DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG="${DPOD_CLOUD_AGENT_OPERATOR_VERSION}-amd64"
    IMAGES_DIR="/tmp"
    USER_ID="admin"
  2. Load the operator catalog and bundle images to openshift-marketplace namespace:

    Code Block
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-operator-catalog-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-catalog:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
    	
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-operator-bundle-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-bundle:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
  3. Load the operator image to the DPOD Cloud Agent namespace (for namespace scope deployment) or to the openshift-operators namespace (for cluster scope deployment):

    Code Block
    # if Installation Mode is "AllNamespaces" (cluster scope), use openshift-operators
    # if Installation Mode is "OwnNamespace" (namespace scope), use ${DPOD_CLOUD_AGENT_NAMESPACE}
    DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE=${DPOD_CLOUD_AGENT_NAMESPACE}
    
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-operator-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE}/dpod-cloud-agent-operator:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
  4. Load application images to the DPOD Cloud Agent namespace:

    Code Block
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-api-proxy-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-api-proxy:${DPOD_CLOUD_AGENT_IMAGE_TAG}
    
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-http-ingester-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-http-ingester:${DPOD_CLOUD_AGENT_IMAGE_TAG}
    
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-manager-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-manager:${DPOD_CLOUD_AGENT_IMAGE_TAG}
    
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-messaging-broker-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-messaging-broker:${DPOD_CLOUD_AGENT_IMAGE_TAG}
    
    skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) docker-archive:${IMAGES_DIR}/dpod-ca-syslog-ingester-${DPOD_CLOUD_AGENT_VERSION}.tgz \
        docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-syslog-ingester:${DPOD_CLOUD_AGENT_IMAGE_TAG}	
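
The five application-image copies in step 4 differ only in the image name, so they can be driven from a list. A sketch, reusing the variable values from step 1 (echo prints each command for review; remove the echo to perform the copies):

```shell
# Same example values as step 1; adjust to your environment.
DPOD_CLOUD_AGENT_NAMESPACE="dpod-cloud-agent"
CONTAINER_REGISTRY_EXTERNAL_URL="default-route-openshift-image-registry.apps.ocp4.mycluster.com"
DPOD_CLOUD_AGENT_VERSION="1.0.19.0"
DPOD_CLOUD_AGENT_IMAGE_TAG="${DPOD_CLOUD_AGENT_VERSION}-amd64"
IMAGES_DIR="/tmp"
USER_ID="admin"
OC_TOKEN="$(oc whoami -t 2>/dev/null || true)"   # empty if not logged in

# PPA file names map to image names by a prefix swap:
#   dpod-ca-<name>-<version>.tgz  ->  dpod-cloud-agent-<name>:<tag>
for name in api-proxy http-ingester manager messaging-broker syslog-ingester; do
  src="docker-archive:${IMAGES_DIR}/dpod-ca-${name}-${DPOD_CLOUD_AGENT_VERSION}.tgz"
  dst="docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-${name}:${DPOD_CLOUD_AGENT_IMAGE_TAG}"
  # echo makes this a dry run; remove "echo" to actually copy
  echo skopeo copy --all --dest-creds="${USER_ID}:${OC_TOKEN}" "${src}" "${dst}"
done
```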

Creating / Updating the ImageContentSourcePolicy

The DPOD Cloud Agent Operator deploys containers whose images reference IBM's cp.icr.io/cp/dpod container registry. Since the images were loaded into your own container registry (and are not yet available in the IBM container registry), mirroring must be configured using an ImageContentSourcePolicy resource.

In the following example for OCP, the path of cp.icr.io/cp/dpod is mirrored both by the internal OCP registry namespace dpod-cloud-agent and by a private external registry my-external-registry.com/dpod. You should adjust these values according to the container registry that the images were loaded into.

Code Block
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: dpod-cloud-agent-registry-mirror
spec:
  repositoryDigestMirrors:
    - mirrors:
        - image-registry.openshift-image-registry.svc:5000/dpod-cloud-agent
        - my-external-registry.com/dpod
      source: cp.icr.io/cp/dpod

Without a proper ImageContentSourcePolicy, the pods will fail with an ImagePullBackOff error when trying to pull the images.
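
The policy only changes how the node resolves the repository part of a by-digest image reference; the image path and digest are preserved. Conceptually (a sketch of the rewrite, not what the kubelet literally executes; the digest below is hypothetical):

```shell
source="cp.icr.io/cp/dpod"
mirror="image-registry.openshift-image-registry.svc:5000/dpod-cloud-agent"

# A by-digest reference as the operator would use it (hypothetical digest):
image="cp.icr.io/cp/dpod/dpod-cloud-agent-manager@sha256:0123abcd"

# The node tries the mirror first by swapping the repository prefix:
mirrored="${mirror}${image#"${source}"}"
echo "${mirrored}"
# image-registry.openshift-image-registry.svc:5000/dpod-cloud-agent/dpod-cloud-agent-manager@sha256:0123abcd
```

Note that repositoryDigestMirrors applies to pulls by digest, which is how OLM-managed operators reference their images.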

...

CatalogSource

Use the following YAML example to create a CatalogSource for the DPOD Cloud Agent (typically CatalogSources are created in the openshift-marketplace namespace):

...
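
Where the YAML example is not shown above, a CatalogSource for this operator would typically follow the shape below. This is a sketch: the displayName and publisher values are assumptions, and the image path assumes the catalog image was loaded as described earlier; the resource name matches the one used in the validation command later in this page.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-dpod-cloud-agent-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: ${CONTAINER_REGISTRY_INTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-catalog:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
  displayName: IBM DataPower Operations Dashboard Cloud Agent   # assumption
  publisher: IBM                                                # assumption
```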

Do not forget to replace the variable references (${...}) with the actual values before creating the CatalogSource.

Using the OpenShift Console

To create the CatalogSource resource using the OpenShift Console, use the following steps:

  1. Navigate to the OpenShift Console UI.

  2. In the top-right of the UI, on the header bar, click the Import button (+) to import YAML.

  3. Copy and paste the above YAML example into the editor.

  4. Click the Create button to create the resource.

Using the OCP CLI (oc)

To create this resource using the oc CLI, use the following steps:

  1. Create a YAML file containing the above YAML example.

  2. Use the oc apply command to apply the YAML resource:

    Code Block
     oc apply -f ibm-datapower-operations-dashboard-operator-catalog.yaml

Validating that the CatalogSource is Installed and Ready

To validate that the CatalogSource resource was installed correctly, use the following steps.

Validate that the CatalogSource pod is ready

Use the following oc command to get the CatalogSource status and verify that the last observed state is READY:

Code Block
# note: "yq read" is yq v2 syntax
oc get catalogsource ibm-dpod-cloud-agent-catalog -n openshift-marketplace -o yaml | yq read - "status.connectionState.lastObservedState"

Validate that the CatalogSource was processed into OperatorHub

  1. Navigate to the OpenShift Console UI.

  2. On the left panel, expand the Operators section.

  3. Select OperatorHub.

  4. At the top of the OperatorHub section, enter datapower operations dashboard into the Filter search box.

  5. A tile should be shown titled IBM DataPower Operations Dashboard Cloud Agent.

Installing the DPOD Cloud Agent Operator

To install the DPOD Cloud Agent Operator, use the following steps:

Using the OpenShift Console

  1. Use the previous steps to locate the IBM DataPower Operations Dashboard Cloud Agent Operator tile in the OperatorHub UI.

  2. Select the IBM DataPower Operations Dashboard Cloud Agent tile. A panel to the right should appear.

  3. Click the Install button on the right panel.

  4. Under Installation Mode select your desired installation mode.

  5. Select the desired Update Channel.

  6. Select the desired Approval Strategy.

  7. Click the Subscribe button to install the IBM DataPower Operations Dashboard Cloud Agent Operator.

The Approval Strategy determines whether the IBM DataPower Operations Dashboard Cloud Agent Operator is updated automatically when new releases become available within the selected channel. If Automatic is selected, over-the-air updates occur automatically as they become available. If Manual is selected, an administrator must approve each update as it becomes available through OLM.

Using the OCP CLI (oc)

To create the DPOD Cloud Agent Operator subscription using the oc CLI, use the following steps:

...
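
Where the Subscription YAML is not shown above, it typically follows the shape below. This is a sketch: the channel and package name are assumptions (check the values offered by the catalog in OperatorHub), and the namespace depends on the chosen installation mode.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-dpod-cloud-agent-operator   # example name
  namespace: dpod-cloud-agent           # use openshift-operators for AllNamespaces mode
spec:
  channel: v1.0                         # assumption: use the channel shown in OperatorHub
  name: ibm-dpod-cloud-agent            # assumption: the package name from the catalog
  source: ibm-dpod-cloud-agent-catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic        # or Manual
```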

  1. Create a YAML file containing the above YAML example.

  2. Use the oc apply command to apply the YAML resource.

    Code Block
     oc apply -f ibm-datapower-operations-dashboard-cloud-agent-operator.yaml

DPOD Cloud Agent Network Configuration

The Cloud Agent sends data to, and receives data from, the DataPower Operations Dashboard instance, which is installed outside the Kubernetes cluster.

Currently the DPOD Cloud Agent Operator supports the following methods for exposing the Cloud Agent’s services:

  • NodePort - (default) The Cloud Agent operator will create services with the type of NodePort to expose the services externally to OCP.

  • Custom - The Cloud Agent operator will not create any resources for exposing the services externally to OCP, and it is the user’s responsibility to create, update and delete these resources (e.g.: Ingress controller, LoadBalancer services, etc.). For more information, see the Kubernetes documentation.
For Route configuration, see OCP Route documentation.
    For ingress configuration see Kubernetes Ingress documentation.

Cloud Agent Inbound (ingress) Communication

The Cloud Agent inbound communication includes:

  • Management API invocations generated by DataPower Operations Dashboard to the manager component of the Cloud Agent.

  • Kafka communication to the messaging component of the Cloud Agent (messaging brokers).

Manager

The manager component has a number of properties (in the Cloud Agent CR) for controlling the communication:

  • incomingTrafficMethod - The method of exposing the manager to incoming traffic from outside the cluster.
    Available options are: Custom, NodePort, Route (default is Route for OpenShift and NodePort for Kubernetes).

  • externalHost - The external host for accessing the manager from outside the cluster.

  • externalPort - The external port for accessing the manager from outside the cluster (default is 443).

  • incomingTrafficPort - The port for exposing the manager to incoming traffic from outside the cluster (when incomingTrafficMethod is NodePort, default is the value of externalPort).

For more information, see Manager API documentation.

Messaging

The messaging component has a number of properties (in the Cloud Agent CR) for controlling the communication:

  • incomingTrafficMethod - The method of exposing the messaging to incoming traffic from outside the cluster.
    Available options are: Custom, NodePort (default is NodePort).

  • externalHost - The external host for accessing the messaging from outside the cluster. This value will be published by the messaging brokers (Kafka).

  • externalPortStart - The starting external port for accessing the messaging from outside the cluster. The bootstrap endpoint will use this port, and each messaging broker will use a consecutive port (default is 30100).

  • incomingTrafficPortStart - The starting port for exposing the messaging to incoming traffic from outside the cluster (when incomingTrafficMethod is NodePort). The bootstrap endpoint will use this port, and each messaging broker will use a consecutive port (default is the value of externalPortStart).

For more information, see Messaging API documentation.

The Cloud Agent Operator will create the following Kubernetes services for the messaging component:

  • <CR name>-msg-bse-svc - A NodePort service for externally accessing the messaging bootstrap port.

  • <CR name>-msg-dir-svc-<broker number> - A NodePort service for each messaging broker (numbered from zero) for external direct access to that broker, using consecutive ports starting at externalPortStart + 1 (e.g. 30101, 30102, etc.).
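
The port layout above can be sketched as follows, assuming the default externalPortStart of 30100 and a hypothetical deployment with three brokers:

```shell
# Messaging NodePort layout: bootstrap gets externalPortStart,
# broker i gets externalPortStart + 1 + i.
EXTERNAL_PORT_START=30100
BROKERS=3   # assumption for illustration

echo "bootstrap (<CR name>-msg-bse-svc): ${EXTERNAL_PORT_START}"
i=0
while [ "${i}" -lt "${BROKERS}" ]; do
  echo "broker ${i} (<CR name>-msg-dir-svc-${i}): $((EXTERNAL_PORT_START + 1 + i))"
  i=$((i + 1))
done
```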

Deploying the DPOD Cloud Agent Instance

To deploy the DPOD Cloud Agent instance, create a CustomResource.

This is a minimal example of the CustomResource. The complete API is documented in DpodCloudAgent.

Code Block
apiVersion: integration.ibm.com/v1beta1
kind: DpodCloudAgent
metadata:
  namespace: integration
  name: dpod-cloud-agent-prod
spec:
  discovery:
    namespaces:
      - datapower-gateways-ns
  license:
    accept: true
    license: L-GHED-75SD3J
    use: Production
  manager:
    externalHost: dpod-cloud-agent-manager.apps.ocp10.mycluster.com
  messaging:
    externalHost: dpod-cloud-agent-messaging.apps.ocp10.mycluster.com
    storage:
      className: app-storage
  version: 1.0.19.0

Validating the Cloud Agent Instance

Using the OpenShift Console

To validate the CustomResource using the OpenShift Console, use the following steps.

  1. Navigate to your OpenShift Console UI.

  2. Navigate to Installed Operators and choose IBM DataPower Operations Dashboard Cloud Agent.

  3. Click on DpodCloudAgent tab and make sure the new CustomResource is in Running Phase.

Using the OCP CLI (oc)

To validate the CustomResource using the oc CLI, use the following steps.

  1. Execute the following command and make sure the new CustomResource is in PHASE Running:

    Code Block
    # oc get DpodCloudAgent <CR name> -n <Cloud Agent namespace>
    oc get DpodCloudAgent dpod-cloud-agent-prod -n dpod-cloud-agent