Red Hat OpenShift Operator Start-Up Guide

Introduction

This start-up guide describes how to deploy a Stonebranch Universal Agent to an OpenShift namespace using the Universal Agent Operator from the Red Hat Marketplace.

When Universal Agent has been deployed, it can be used to send and receive files from any external server or cloud storage and save them to a shared persistent storage system in the related OpenShift namespace.

Any application in OpenShift that has access to the same shared persistent storage can make use of the received files, or place data on the shared persistent storage for further file transfer using Universal Agent.

At the end of this tutorial, an example scenario shows how to transfer data from a Linux Server to a shared persistent storage system.

Step 1  Prerequisites

Universal Agent is deployed as a POD in the namespace of a Project.

Create a Project

Create a project with the namespace newsflash. If you already have a project in which you would like to deploy Universal Agent, you can skip this step.
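
If you prefer to create the project from a manifest instead of the web console, the following minimal sketch can be applied with oc create -f. ProjectRequest is the standard OpenShift project-creation resource; the display name is an assumption:

YAML

      apiVersion: project.openshift.io/v1
      kind: ProjectRequest
      metadata:
        name: newsflash
      # Optional human-readable name shown in the web console (assumption)
      displayName: Newsflash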


Create a Persistent Volume Claim

Create the Persistent Volume Claim my-custom-pvc with a size of 500 MiB. If you already have a Persistent Volume Claim, you can skip this step.
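
A minimal sketch of an equivalent manifest is shown below. The access mode is an assumption (ReadWriteMany, so that several PODs can share the volume), and the storage class is omitted so that your cluster's default is used:

YAML

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: my-custom-pvc
        namespace: newsflash
      spec:
        accessModes:
          - ReadWriteMany        # shared access for multiple PODs (assumption)
        resources:
          requests:
            storage: 500Mi       # 500 MiB, as used in this guide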


Step 2  Operator Installation

The Operator can be installed either:

  • Directly from the Marketplace.
  • From within the OpenShift Web Console.

Operator Installation from Red Hat Marketplace

  1. For information on registering your cluster and creating a namespace, see the Red Hat Marketplace documentation. This must be done prior to installing the Operator.
  2. On the main menu, click Workspace > My Software > product > Install Operator.
  3. In the Update Channel section, select an option.
  4. In the Approval Strategy section, select either Automatic or Manual. The approval strategy determines how operator upgrades are processed.
  5. On the Target Cluster section:
    • Click the checkbox next to the clusters where you want to install the Operator.
    • For each cluster you selected, under Namespace Scope, on the Select Scope list, select an option.
  6. Click Install. It may take several minutes for installation to complete.
  7. Once installation is complete, the status will change from installing to Up to date.
  8. For further information, see the Red Hat Marketplace Operator documentation.

Operator Installation from within the OpenShift Web Console  

This section describes how to install the Universal Agent Operator on an OpenShift cluster from within the OpenShift Web Console.

1  Deploy the Operator from the Red Hat Marketplace

In the OpenShift Web Console, select Operators -> OperatorHub and search for stonebranch (Provider Type: Marketplace).


2  Install the Operator

  • In the Update Channel section, select an option.
  • In the Approval Strategy section, select either Automatic or Manual. The approval strategy determines how operator upgrades are processed.
  • Select the namespace of your application, e.g., newsflash.
  • Click Install. It may take several minutes for installation to complete.
  • Once installation is complete, the status will change from installing to Up to date.
  • For further information, see the Red Hat Marketplace Operator documentation.

Step 3  Verification of Operator Installation

  1. When the status changes to Up to date, click the vertical ellipsis and select Cluster Console.
  2. Open the cluster where you installed the product.
  3. Go to Operators > Installed Operators.
  4. Select the Namespace or Project you installed on.
  5. Verify that the status for the product is Succeeded.
  6. Click the product name to open details.

 

Step 4  Adjust the Operator YAML (CSV)

Once the Operator is installed, its settings need to be adjusted to match your Universal Controller landscape.

All adjustments are made in the YAML file (CSV) of the Universal Agent Operator.

Universal Controller / OMS Setting

The following table identifies the parameters that you must set for Universal Agent to connect to the related Universal Controller OMS.

  Parameter           Description

  UAGAGENTCLUSTERS    Name of the OpenShift application. This must match the name of the Agent Cluster configured in Universal Controller.

  UAGOMSSERVERS       Port and IP address of the Universal Controller message middleware (OMS), in the format port@address.

  UAGTRANSIENT        "yes": the Agent is transient, meaning it is deleted (decommissioned) when the Agent shuts down or goes offline.

In the example shown, Universal Agent would connect to OMS port 7878 on the server 192.168.88.40. The Agent would register with the Universal Controller Agent Cluster AGENT_CLUSTER_APP_NEWSFLASH.

The POD in which the Agent is installed would use the Persistent Volume Claim my-custom-pvc.

Note

Agent Cluster AGENT_CLUSTER_APP_NEWSFLASH must exist in Universal Controller; if it does not, create it manually.

Universal Controller / OMS Settings Example

YAML

      "omsAutoStart": "no",

       "uagAgentClusters": "AGENT_CLUSTER_APP_NEWSFLASH",

      "uagAutoStart": "yes",

      "uagEnableSsl": "yes",

      "uagNetname": "OPSAUTOCONF",

      "uagOmsServers": "7878\\\\@192.168.88.40",

     "uagTransient": "yes",

      "uagUsername": "stonebranch",

      "uemAutoStart": "yes"

PersistentVolumeClaim Settings

Here you can define whether to use an existing shared Persistent Volume Claim ("useExistingSharedStorage") or to create a new one ("createSharedPersistentVolumeClaim").

This example uses the existing shared Persistent Volume Claim "my-custom-pvc", which was created in Step 1  Prerequisites.

YAML

      "createSharedPersistentVolumeClaim": {

      "enable": false,

      "storage_class": "ibmc-file-bronze-gid",

      "storage_size": "5Gi"

       },

       "useExistingSharedStorage": {

       "enable": true,

       "storage_name": "my-custom-pvc"


Step 5  Create an Instance

After defining the Universal Agent connection settings in the YAML file, an instance of a POD with a Universal Agent can be created.  

In this example, we create an instance named stonebranch-sample.
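
Conceptually, clicking Create instance applies a custom resource similar to the following sketch. The apiVersion and kind below are assumptions (check the API provided by the installed Operator for the actual values); the spec fields mirror the CSV settings from Step 4:

YAML

      # Hypothetical sketch; apiVersion and kind are assumptions.
      apiVersion: agent.stonebranch.com/v1alpha1
      kind: StonebranchAgent
      metadata:
        name: stonebranch-sample
        namespace: newsflash
      spec:
        uagAgentClusters: "AGENT_CLUSTER_APP_NEWSFLASH"
        uagOmsServers: "7878@192.168.88.40"
        uagTransient: "yes"
        useExistingSharedStorage:
          enable: true
          storage_name: "my-custom-pvc"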


After you click Create instance, the stonebranch-sample-sb-agent POD in the namespace newsflash is created.



Step 6  Verify the Universal Agent POD

Under Deployments, the started stonebranch-sample-sb-agent POD in the namespace newsflash is shown.

This POD contains a Universal Agent, which can send and receive files.

Universal Agent POD Log File

The log file shows that the Agent connected to the OMS of the Universal Controller: 7878@192.168.88.40

Step 7  Verify that the Universal Agent in OpenShift is Connected to Universal Controller

The Universal Agent should have been automatically registered with the Agent Cluster AGENT_CLUSTER_APP_NEWSFLASH, which was defined in the YAML set-up in Step 4.


Note

Agent Cluster AGENT_CLUSTER_APP_NEWSFLASH must exist in Universal Controller; if it does not, create it manually.

Summary

Universal Agent has been registered in Agent Cluster AGENT_CLUSTER_APP_NEWSFLASH. As a result, it can be used to send and receive files from any external server or cloud storage and save them to the shared persistent storage configured in the YAML set-up in Step 4.

In addition to file transfers, Universal Agent can also invoke any type of API (REST, JDBC, SAP RFC, ...) to schedule any type of application installed in OpenShift.

Step 8  Sample Scenario - File Transfer

The following sample scenario shows how a basic file transfer to a shared persistent storage can be set up, and how the transferred data is accessible by any application that has access to the shared persistent storage in OpenShift.

As a sample application, two NGINX webservers will be deployed in the newsflash project, which was set up in Step 1.

The NGINX webservers will have access to the shared persistent storage, meaning that once a file has been received by the Universal Agent and saved to a folder on the shared persistent storage, it is also accessible by both NGINX webservers for further processing.

Configuration Steps

The installation consists of three steps:

  1. Define an Agent Cluster in Universal Controller.
  2. Configure the file transfer workflow using the Universal Controller web GUI.
  3. Deploy two sample NGINX webservers in OpenShift within the newsflash namespace.

1  Define a new Agent Cluster for the OpenShift application newsflash

An agent cluster must be configured in Universal Controller for each Project in OpenShift. When a POD is started in OpenShift, the Universal OpenShift Agent will automatically register with the Universal Controller agent cluster defined in the CSV YAML file in Step 4.

The agent cluster in this example is named AGENT_CLUSTER_APP_NEWSFLASH.

2  Universal Controller File Transfer Task

A standard UDM file transfer task can be used to send and retrieve data from the Universal Agent in OpenShift. The following file transfer task transfers two files from an on-premise Linux Server to the OpenShift share folder /podshare (Persistent Volume Claim: my-custom-pvc):

  • copy src=/home/nils/demo/index.html dest=/podshare/index.html
  • copy src=/home/nils/demo/redhat.png dest=/podshare/redhat.png


File Transfer UDM Script

The following file transfer script is used.

open dest=${ops_agent_ip} src=192.168.88.40 user=${ops_src_cred_user} pwd=${ops_src_cred_pwd}
attrib dest createop=replace
copy src=/home/nils/demo/index.html dest=/podshare/index.html
copy src=/home/nils/demo/redhat.png dest=/podshare/redhat.png

Description

  • Source Credentials: Linux-OS-Credentials (adjust according to your server credentials)
  • Source Linux Server: 192.168.88.40 (adjust according to your server)
  • Source Folder Linux Server: /home/nils/demo (adjust according to your server)
  • Files to Transfer: index.html, redhat.png
  • Destination Agent Cluster: AGENT_CLUSTER_APP_NEWSFLASH
  • Destination Folder in the POD: /podshare/ (this is the mounted POD directory)

The export files of the file transfer task can be found here: filetransfer task.

File Transfer Log File

The log file shows that the two files have been transferred to the /podshare/ folder.

Check File Arrival in the POD Using the OpenShift Web Console

Both transferred files are available in the folder /podshare/.

3  Deploy an application with access to the transferred data

The following deployment YAML deploys two PODs with an NGINX webserver.

The NGINX webservers use the same Persistent Volume Claim as the Universal Agent used for the file transfer.

Persistent Volume Claim: my-custom-pvc
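
A minimal sketch of such a deployment is shown below; the deployment name, labels, container image, and port are assumptions:

YAML

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-newsflash            # deployment name (assumption)
        namespace: newsflash
      spec:
        replicas: 2                      # two NGINX webserver PODs
        selector:
          matchLabels:
            app: nginx-newsflash
        template:
          metadata:
            labels:
              app: nginx-newsflash
          spec:
            containers:
              - name: nginx
                # Unprivileged NGINX image that runs under OpenShift's restricted SCC (assumption)
                image: nginxinc/nginx-unprivileged:latest
                ports:
                  - containerPort: 8080
                volumeMounts:
                  - name: podshare
                    mountPath: /podshare       # same folder the Universal Agent writes to
            volumes:
              - name: podshare
                persistentVolumeClaim:
                  claimName: my-custom-pvc     # shared PVC from Step 1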

 


After clicking Create, two NGINX webserver PODs are started based on the given deployment.

Both PODs can access the two files transferred to the folder /podshare/ because all PODs use the Persistent Volume Claim my-custom-pvc.

First NGINX webserver POD with access to the transferred files

As you can see, both files, index.html and redhat.png, are available in the first NGINX POD.

Second NGINX webserver POD with access to the transferred files

As you can see, both files, index.html and redhat.png, are also available in the second NGINX POD.

This use case showed how to transfer data from a Linux Server to a Persistent Volume Claim in an OpenShift cluster. As a result, the transferred data is accessible by any application that has access to the shared persistent storage. In our case, both NGINX webservers could access the files transferred from the Linux Server to OpenShift.

Summary

The Universal Automation Center, with the introduction of the newly developed Operator for the Universal OpenShift Agent, enables the secure and reliable transfer of business data located on the mainframe, on cloud storage platforms, or on any server, to any shared persistent storage connected to your OpenShift cluster (and vice versa).