The Operator Lifecycle Manager (OLM) provides a framework for installing, updating, and managing the lifecycle of operators and their services.
Prerequisites
The DataPower Operations Dashboard Cloud Agent Operator currently supports installation via OLM. See Prerequisites for the supported versions.
Installation Mode
When installing an operator via OLM, there are two options for the Installation Mode:

- All namespaces on the cluster: `AllNamespaces` (also known as cluster scope)
- A specific namespace on the cluster: `OwnNamespace` (also known as namespace scope)
In `AllNamespaces` mode, the operator uses a `ClusterRole` and `ClusterRoleBinding`, giving it cluster-wide scope to manage DataPower Operations Dashboard Cloud Agent resources across all namespaces. In `OwnNamespace` mode, the operator uses a `Role` and `RoleBinding` as its primary access (limited to the namespace it is installed in), with a limited set of `ClusterRole` permissions (see Cluster-scope permissions).

Do not install the operator in more than one mode. If `AllNamespaces` is chosen, do not subsequently install a second instance in `OwnNamespace` mode.
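One way to see which mode an installed operator ended up in is to check the RBAC bindings that OLM created for it. This is an illustrative check only, not part of the product documentation; the exact binding names and the installation namespace depend on your environment:

```bash
# AllNamespaces (cluster scope): a ClusterRoleBinding is expected for the operator
oc get clusterrolebinding | grep -i dpod

# OwnNamespace (namespace scope): a RoleBinding is expected in the installation namespace
oc get rolebinding -n dpod-cloud-agent | grep -i dpod
```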
Available Versions
| DPOD Version | Cloud Agent Operator Version |
|---|---|
| 1.0.19.0 | 0.1.0 |
Loading Images to a Container Registry
DPOD Cloud Agent images are currently available for download from Passport Advantage (PPA) and need to be loaded into a container registry so that they can be pulled by your Kubernetes cluster. The container registry may be any external registry which is accessible to the cluster, or the cluster’s internal registry.
This is the list of the image file names (as available on PPA) and the image names and tags:
| Image File Name | Image Name and Tag |
|---|---|
| dpod-ca-operator-catalog-<DPOD-VERSION>.tgz | dpod-cloud-agent-operator-catalog:<OPERATOR-VERSION>-amd64 |
| dpod-ca-operator-bundle-<DPOD-VERSION>.tgz | dpod-cloud-agent-operator-bundle:<OPERATOR-VERSION>-amd64 |
| dpod-ca-operator-<DPOD-VERSION>.tgz | dpod-cloud-agent-operator:<OPERATOR-VERSION>-amd64 |
| dpod-ca-api-proxy-<DPOD-VERSION>.tgz | dpod-cloud-agent-api-proxy:<DPOD-VERSION>-amd64 |
| dpod-ca-http-ingester-<DPOD-VERSION>.tgz | dpod-cloud-agent-http-ingester:<DPOD-VERSION>-amd64 |
| dpod-ca-manager-<DPOD-VERSION>.tgz | dpod-cloud-agent-manager:<DPOD-VERSION>-amd64 |
| dpod-ca-messaging-broker-<DPOD-VERSION>.tgz | dpod-cloud-agent-messaging-broker:<DPOD-VERSION>-amd64 |
| dpod-ca-syslog-ingester-<DPOD-VERSION>.tgz | dpod-cloud-agent-syslog-ingester:<DPOD-VERSION>-amd64 |
In order to preserve the image digests in the container registry, we recommend using the `skopeo` utility (available as a package for most distributions: Installing Skopeo).
The `skopeo` syntax is as follows:
```bash
skopeo copy --all --dest-creds=<destination container registry credentials if needed> \
  docker-archive:<image file full path> \
  docker://<destination container registry path>/<image name>:<image tag>
```
Consider the following example for loading the images to the OpenShift (OCP) internal container registry.
The commands might differ slightly, depending on the chosen container registry, namespace name, versions, etc.
Set variables with the source, destination, versions, etc.:
```bash
DPOD_CLOUD_AGENT_NAMESPACE="dpod-cloud-agent"
CONTAINER_REGISTRY_EXTERNAL_URL="default-route-openshift-image-registry.apps.ocp4.mycluster.com"
CONTAINER_REGISTRY_INTERNAL_URL="image-registry.openshift-image-registry.apps.ocp4.mycluster.com"
DPOD_CLOUD_AGENT_VERSION="1.0.19.0"
DPOD_CLOUD_AGENT_OPERATOR_VERSION="0.1.0"
DPOD_CLOUD_AGENT_IMAGE_TAG="${DPOD_CLOUD_AGENT_VERSION}-amd64"
DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG="${DPOD_CLOUD_AGENT_OPERATOR_VERSION}-amd64"
IMAGES_DIR="/tmp"
USER_ID="admin"
```
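When loading into the OCP internal registry, the project for the DPOD Cloud Agent images must exist before the images are pushed (the `openshift-marketplace` and `openshift-operators` projects already exist by default). A minimal sketch, assuming you are logged in to the cluster with permission to create projects:

```bash
# create the DPOD Cloud Agent project if it does not exist yet, otherwise just switch to it
oc new-project ${DPOD_CLOUD_AGENT_NAMESPACE} || oc project ${DPOD_CLOUD_AGENT_NAMESPACE}
```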
Load the operator catalog and bundle images to the `openshift-marketplace` namespace:

```bash
skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-operator-catalog-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-catalog:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-operator-bundle-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-bundle:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
```
Load the operator image to the DPOD Cloud Agent namespace (for namespace scope deployment) or to the `openshift-operators` namespace (for cluster scope deployment):

```bash
# if Installation Mode is "AllNamespaces" (cluster scope), use openshift-operators
# if Installation Mode is "OwnNamespace" (namespace scope), use ${DPOD_CLOUD_AGENT_NAMESPACE}
DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE=${DPOD_CLOUD_AGENT_NAMESPACE}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-operator-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE}/dpod-cloud-agent-operator:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
```
Load application images to the DPOD Cloud Agent namespace:
```bash
skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-api-proxy-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-api-proxy:${DPOD_CLOUD_AGENT_IMAGE_TAG}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-http-ingester-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-http-ingester:${DPOD_CLOUD_AGENT_IMAGE_TAG}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-manager-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-manager:${DPOD_CLOUD_AGENT_IMAGE_TAG}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-messaging-broker-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-messaging-broker:${DPOD_CLOUD_AGENT_IMAGE_TAG}

skopeo copy --all --dest-creds=${USER_ID}:$(oc whoami -t) \
  docker-archive:${IMAGES_DIR}/dpod-ca-syslog-ingester-${DPOD_CLOUD_AGENT_VERSION}.tgz \
  docker://${CONTAINER_REGISTRY_EXTERNAL_URL}/${DPOD_CLOUD_AGENT_NAMESPACE}/dpod-cloud-agent-syslog-ingester:${DPOD_CLOUD_AGENT_IMAGE_TAG}
```
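To confirm the images were loaded, the image streams created in the namespace can be listed. This is an optional verification step, assuming the OCP internal registry is used:

```bash
# each loaded image should appear as an image stream in the target namespace
oc get imagestreams -n ${DPOD_CLOUD_AGENT_NAMESPACE}
```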
Creating / Updating the ImageContentSourcePolicy
The DPOD Cloud Agent operator will deploy containers with images referencing IBM’s `cp.icr.io/cp/dpod` container registry. Since the images are currently loaded into your own container registry (and are not yet available in the IBM container registry), mirroring must be configured using the `ImageContentSourcePolicy` resource.
In the following example for OCP, the path `cp.icr.io/cp/dpod` is mirrored both by the internal OCP registry namespace `dpod-cloud-agent` and by a private external registry `my-external-registry.com/dpod`. You should adjust these values according to the container registry that the images were loaded into.
```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: dpod-cloud-agent-registry-mirror
spec:
  repositoryDigestMirrors:
    - mirrors:
        - image-registry.openshift-image-registry.svc:5000/dpod-cloud-agent
        - my-external-registry.com/dpod
      source: cp.icr.io/cp/dpod
```
Without a proper `ImageContentSourcePolicy`, the pods will fail with an `ImagePullBackOff` error when trying to pull the images.
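A quick way to confirm the mirror policy exists, and to inspect failed pulls if pods are already stuck in `ImagePullBackOff`, is shown below. These are generic diagnostic commands rather than part of the product documentation; adjust the namespace to where the Cloud Agent pods run:

```bash
# confirm the mirror policy was created
oc get imagecontentsourcepolicy dpod-cloud-agent-registry-mirror

# inspect failed image pulls in the Cloud Agent namespace
oc get events -n dpod-cloud-agent --field-selector reason=Failed
```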
Installing the CatalogSource
Use the following YAML example to create a `CatalogSource` for the DPOD Cloud Agent (typically `CatalogSource` resources are created in the `openshift-marketplace` namespace):
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-dpod-cloud-agent-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM DataPower Operations Dashboard Cloud Agent
  image: ${CONTAINER_REGISTRY_INTERNAL_URL}/openshift-marketplace/dpod-cloud-agent-operator-catalog:${DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG}
  publisher: IBM
  sourceType: grpc
```
Do not forget to replace the variable references (`${...}`) with the actual values before creating the `CatalogSource`.
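If you keep the YAML as a template containing the `${...}` references, one possible way to substitute them is the `envsubst` utility (from the gettext package). The template file name below is just an example, and the variables must be exported so `envsubst` can see them:

```bash
# make the variables set earlier visible to envsubst, then render the template
export CONTAINER_REGISTRY_INTERNAL_URL DPOD_CLOUD_AGENT_OPERATOR_IMAGE_TAG
envsubst < catalog-source-template.yaml > ibm-datapower-operations-dashboard-operator-catalog.yaml
```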
Using the OpenShift Console
To create the `CatalogSource` resource using the OpenShift Console, use the following steps:

1. Navigate to the OpenShift Console UI.
2. In the top-right of the UI, on the header bar, click the Import button (+) to import YAML.
3. Copy and paste the above YAML example into the editor.
4. Click the Create button to create the resource.
Using the OCP CLI (oc)
To create this resource using the `oc` CLI, use the following steps:

1. Create a YAML file containing the above YAML example.
2. Use the `oc apply` command to apply the YAML resource:

```bash
oc apply -f ibm-datapower-operations-dashboard-operator-catalog.yaml
```
Validating that the CatalogSource is Installed and Ready
To validate that the `CatalogSource` resource was installed correctly, use the following steps.

Validate that the CatalogSource pod is ready

Use the following `oc` command to get the `CatalogSource` pod status and verify the status is `READY`:

```bash
oc get catalogsource ibm-dpod-cloud-agent-catalog -n openshift-marketplace -o yaml | yq read - "status.connectionState.lastObservedState"
```
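If `yq` is not available, the same field can be read with a JSONPath query; this is an equivalent alternative rather than the documented command:

```bash
# prints READY once the catalog pod is serving its gRPC index
oc get catalogsource ibm-dpod-cloud-agent-catalog -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'
```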
Validate that the CatalogSource was processed into OperatorHub
1. Navigate to the OpenShift Console UI.
2. On the left panel, expand the Operators section.
3. Select OperatorHub.
4. At the top of the OperatorHub section, enter `datapower operations dashboard` into the Filter search box.
5. A tile should be shown titled IBM DataPower Operations Dashboard Cloud Agent.
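As an alternative to the console, the catalog contents can also be checked from the CLI; the package should appear in the cluster's package manifests once the CatalogSource has been processed:

```bash
# the DPOD Cloud Agent operator package should be listed
oc get packagemanifests -n openshift-marketplace | grep -i dpod
```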
Installing the DPOD Cloud Agent Operator
To install the DPOD Cloud Agent Operator, use the following steps:
Using the OpenShift Console
1. Use the previous steps to locate the IBM DataPower Operations Dashboard Cloud Agent Operator tile in the OperatorHub UI.
2. Select the IBM DataPower Operations Dashboard Cloud Agent tile. A panel should appear on the right.
3. Click the Install button on the right panel.
4. Under Installation Mode, select your desired installation mode.
5. Select the desired Update Channel.
6. Select the desired Approval Strategy.
7. Click the Subscribe button to install the IBM DataPower Operations Dashboard Cloud Agent Operator.
The Approval Strategy determines whether the IBM DataPower Operations Dashboard Cloud Agent Operator is updated automatically when new releases become available within the selected channel. If `Automatic` is selected, over-the-air updates occur automatically as they become available. If `Manual` is selected, an administrator needs to approve each update as it becomes available through OLM.
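With a Manual approval strategy, pending updates appear as InstallPlans that must be approved before OLM proceeds. The following is a generic OLM sketch; the InstallPlan name is a placeholder you would take from the list output:

```bash
# list install plans in the operator namespace
oc get installplan -n ${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE}

# approve a pending install plan (replace install-xxxxx with the actual name)
oc patch installplan install-xxxxx -n ${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE} \
  --type merge -p '{"spec":{"approved":true}}'
```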
Using the OCP CLI (oc)
To create the DPOD Cloud Agent Operator subscription using the `oc` CLI, use the following steps:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-dpod-cloud-agent-operator
  namespace: ${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE}
spec:
  channel: stable-v0.1
  installPlanApproval: Automatic
  name: dpod-cloud-agent-operator
  source: ibm-dpod-cloud-agent-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: dpod-cloud-agent-operator.v0.1.0
```
Do not forget to replace the variable references (`${...}`) with the actual values before creating the subscription.
1. Create a YAML file containing the above YAML example.
2. Use the `oc apply` command to apply the YAML resource:

```bash
oc apply -f ibm-datapower-operations-dashboard-cloud-agent-operator.yaml
```
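Once the subscription is applied, you can verify that the operator's ClusterServiceVersion reaches the `Succeeded` phase. A minimal check, assuming the operator namespace chosen earlier:

```bash
# the dpod-cloud-agent-operator CSV should eventually report PHASE Succeeded
oc get csv -n ${DPOD_CLOUD_AGENT_OPERATOR_NAMESPACE}
```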
DPOD Cloud Agent Network Configuration
The Cloud Agent sends data to and receives data from the DataPower Operations Dashboard instance, which is installed outside the Kubernetes cluster.
Currently the DPOD Cloud Agent Operator supports the following methods for exposing the Cloud Agent’s services:
- NodePort (default) - The Cloud Agent operator will create services of type `NodePort` to expose the services externally to OCP.
- Custom - The Cloud Agent operator will not create any resources for exposing the services externally to OCP, and it is the user’s responsibility to create, update and delete these resources (e.g. an Ingress controller, LoadBalancer services, etc.). For more information, see the Kubernetes documentation. For `route` configuration, see the OCP Route documentation. For `ingress` configuration, see the Kubernetes Ingress documentation.
Cloud Agent Inbound (ingress) Communication
The Cloud Agent inbound communication includes:

- Management API invocation generated by DataPower Operations Dashboard to the `manager` component of the Cloud Agent.
- Kafka communication to the `messaging` component of the Cloud Agent (messaging brokers).
Manager
The `manager` component has a number of properties (in the Cloud Agent CR) for controlling the communication:

- `incomingTrafficMethod` - The method of exposing the manager to incoming traffic from outside the cluster. Available options are `Custom`, `NodePort`, and `Route` (default is `Route` for OpenShift and `NodePort` for Kubernetes).
- `externalHost` - The external host for accessing the manager from outside the cluster.
- `externalPort` - The external port for accessing the manager from outside the cluster (default is 443).
- `incomingTrafficPort` - The port for exposing the manager to incoming traffic from outside the cluster when `incomingTrafficMethod` is `NodePort` (default is the value of `externalPort`).
For more information, see Manager API documentation.
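As an illustration only, these properties live under `spec.manager` in the Cloud Agent CR (see the full example in Deploying the DPOD Cloud Agent Instance below); the values here are placeholders, not recommended settings:

```yaml
spec:
  manager:
    incomingTrafficMethod: NodePort
    externalHost: dpod-cloud-agent-manager.apps.ocp4.mycluster.com
    externalPort: 443
    incomingTrafficPort: 30443
```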
Messaging
The `messaging` component has a number of properties (in the Cloud Agent CR) for controlling the communication:

- `incomingTrafficMethod` - The method of exposing the messaging to incoming traffic from outside the cluster. Available options are `Custom` and `NodePort` (default is `NodePort`).
- `externalHost` - The external host for accessing the messaging from outside the cluster. This value will be published by the messaging brokers (Kafka).
- `externalPortStart` - The starting external port for accessing the messaging from outside the cluster. The bootstrap endpoint will use this port, and each messaging broker will use a consecutive port (default is 30100).
- `incomingTrafficPortStart` - The starting port for exposing the messaging to incoming traffic from outside the cluster when `incomingTrafficMethod` is `NodePort`. The bootstrap endpoint will use this port, and each messaging broker will use a consecutive port (default is the value of `externalPortStart`).
For more information, see Messaging API documentation.
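Again as an illustration only, the messaging properties live under `spec.messaging` in the Cloud Agent CR; the comment shows how the consecutive port allocation plays out with the default starting port (values are placeholders):

```yaml
spec:
  messaging:
    incomingTrafficMethod: NodePort
    externalHost: dpod-cloud-agent-messaging.apps.ocp4.mycluster.com
    externalPortStart: 30100  # bootstrap endpoint on 30100, broker 0 on 30101, broker 1 on 30102, ...
```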
The Cloud Agent Operator will create the following Kubernetes services for the messaging component:

- `<CR name>-msg-bse-svc` - A `NodePort` service for externally accessing the messaging bootstrap port.
- `<CR name>-msg-dir-svc-<broker number>` - A `NodePort` service for each messaging broker (numbered starting with zero) for external direct access to that broker, using port `externalPortStart` + <broker number> + 1 (e.g. 30101 for broker 0, 30102 for broker 1, etc.).
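After the instance is deployed (see Deploying the DPOD Cloud Agent Instance below), the created services and their allocated node ports can be inspected with a standard service listing, for example:

```bash
# shows the bootstrap and per-broker NodePort services and their ports
oc get svc -n dpod-cloud-agent
```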
Deploying the DPOD Cloud Agent Instance
In order to deploy the DPOD Cloud Agent instance, a CustomResource should be created.
This is a minimal example of the CustomResource. The complete API is documented in DpodCloudAgent.
```yaml
apiVersion: integration.ibm.com/v1beta1
kind: DpodCloudAgent
metadata:
  namespace: integration
  name: dpod-cloud-agent-prod
spec:
  discovery:
    namespaces:
      - datapower-gateways-ns
  license:
    accept: true
    license: L-GHED-75SD3J
    use: Production
  manager:
    externalHost: dpod-cloud-agent-manager.apps.ocp10.mycluster.com
  messaging:
    externalHost: dpod-cloud-agent-messaging.apps.ocp10.mycluster.com
  storage:
    className: app-storage
  version: 1.0.19.0
```
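To create the resource from this example, save it to a file and apply it with `oc apply`; the file name below is arbitrary:

```bash
oc apply -f dpod-cloud-agent-prod.yaml
```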
Validating the Cloud Agent Instance
Using the OpenShift Console
To validate the CustomResource using the OpenShift Console, use the following steps.
1. Navigate to your OpenShift Console UI.
2. Navigate to Installed Operators and choose IBM DataPower Operations Dashboard Cloud Agent.
3. Click on the DpodCloudAgent tab and make sure the new CustomResource is in the Running phase.
Using the OCP CLI (oc)
To validate the CustomResource using the `oc` CLI, use the following steps.

Execute the following command and make sure the new CustomResource is in PHASE `Running`:

```bash
# oc get DpodCloudAgent <CR name> -n <Cloud Agent namespace>
oc get DpodCloudAgent dpod-cloud-agent-prod -n dpod-cloud-agent
```
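If the CustomResource has not reached the Running phase yet, the same command can be run with `--watch`, and `oc describe` shows the resource's conditions and events for troubleshooting:

```bash
# watch until the PHASE column shows Running
oc get DpodCloudAgent dpod-cloud-agent-prod -n dpod-cloud-agent --watch

# inspect conditions and events if the phase does not progress
oc describe DpodCloudAgent dpod-cloud-agent-prod -n dpod-cloud-agent
```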