Red Hat: OpenShift Jobs & Templates
- 1 Disclaimer
- 2 Version Information
- 3 Overview
- 3.1 Key Features
- 4 Requirements
- 5 Supported Actions
- 6 Configuration Examples
- 6.1 Example: Apply a Job Definition to the OpenShift Cluster
- 6.2 Example: Create a Template on the OpenShift Cluster
- 6.3 Example: Create a Job from a Template stored on the OpenShift cluster
- 6.4 Example: Create a CronJob on the OpenShift Cluster
- 6.5 Example: Create a Job from a CronJob stored on the OpenShift cluster
- 6.6 Example: Delete a Resource present on the OpenShift Cluster
- 7 Importable Configuration Examples
- 7.1 Initial Preparation Steps
- 7.2 How to "Upload" Definition Files to a Universal Controller
- 7.3 Use Case 1: Deploy a Universal Agent inside an OpenShift Cluster, using it to execute a Linux Task.
- 7.3.1 Description
- 7.3.2 How to Run
- 7.3.3 Expected Results:
- 7.4 Use Case 2: Create an OpenShift Template, Run a Parameterized Job, and Handle Success/Failure
- 7.4.1 Description
- 7.4.2 How to Run
- 7.4.3 Expected Results:
- 8 Input Fields
- 9 Output Fields
- 10 Environment Variables
- 11 Cancelation and Rerun
- 12 Dynamic Commands
- 13 Exit Codes
- 14 STDOUT and STDERR
- 15 How To
- 16 Integration Modifications
- 17 Document References
- 18 Changelog
- 18.1 ue-openshift-jt-1.0.2 (2026-01-22)
- 18.1.1 Enhancements
- 18.1.2 Fixes
- 18.2 ue-openshift-jt-1.0.1 (2025-09-04)
- 18.2.1 Enhancements
- 18.2.2 Fixes
- 18.3 ue-openshift-jt-1.0.0 (2025-05-22)
Disclaimer
Your use of this download is governed by Stonebranch’s Terms of Use.
Version Information
| Template Name | Extension Name | Extension Version | Status |
|---|---|---|---|
| OpenShift Jobs & Templates | ue-openshift-jt | V1 (Current 1.0.2) | Fixes and new features are introduced. |
Refer to Changelog for version history information.
Overview
OpenShift is Red Hat’s enterprise Kubernetes platform for deploying, managing, and scaling containerized applications. It supports resources like Jobs, CronJobs, and Templates for handling tasks such as batch processing, scheduled workloads, and reusable application definitions.
This integration enables the deployment of OpenShift Jobs, CronJobs, and Templates, either imperatively or declaratively, along with retrieval of information pertaining to those deployments. It also supports deleting these resources from any namespace, providing flexible management across clusters.
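As a point of reference, the snippet below sketches the kind of single-resource Job manifest this integration deploys, represented as a Python dict (i.e. the result of parsing a YAML or JSON definition). The names, namespace, image, and command are illustrative; the spec values mirror the extension-output example later in this document.

```python
# A minimal Job manifest of the kind this integration applies, as a
# Python dict. Names, namespace, image, and command are illustrative.
job_manifest = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "pi-simple-deploy", "namespace": "extension"},
    "spec": {
        "backoffLimit": 6,
        "ttlSecondsAfterFinished": 3600,  # auto-delete after completion
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [
                    {
                        "name": "pi",
                        "image": "perl:5.34",
                        "command": ["perl", "-Mbignum=bpi", "-wle", "print bpi(5)"],
                    }
                ],
            }
        },
    },
}

print(job_manifest["kind"], job_manifest["metadata"]["name"])  # → Job pi-simple-deploy
```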
Key Features
| Feature | Description |
|---|---|
| Deploy OpenShift Resources | Deploy Job, CronJob, and Template resources, either declaratively or imperatively, from local YAML or JSON files, remote URLs, or UAC scripts. |
| Deploy Jobs from OpenShift Templates | Deploy parameterized Job instances by processing existing Template objects in the cluster. |
| Create Jobs from CronJobs | Create one-off Job resources based on existing CronJob definitions in the cluster. |
| Delete OpenShift Resources | Delete Job, CronJob, and Template resources via the delete action or manually through a Dynamic Command. Job resources can also be set to be deleted automatically after completion. |
| Authentication Options | Supported authentication methods include Basic Authentication and Access Token authentication. |
| Pod Information Streaming | Stream real-time updates about Kubernetes pods as they are created, modified, or terminated. |
| Automatic or Manual Container Information Retrieval | Option to automatically or manually retrieve information, including logs, for all containers pertaining to a specific job. |
Requirements
This integration requires a Universal Agent and a Python runtime to execute the Universal Task.
| Area | Details |
|---|---|
| Python Version | Requires Python 3.11; tested with the Agent-bundled Python distribution. |
| Universal Agent Compatibility | |
| Universal Controller Compatibility | Universal Controller version >= 7.6.0.0. |
| OpenShift & Kubernetes | Tested with Red Hat OpenShift Container Platform version 4.18 and the corresponding Kubernetes version 1.31.5. |
| Network and Connectivity | Connectivity to an OpenShift cluster. |
Supported Actions
There are nine top-level actions, controlled by the Action field:
- Apply a job definition directly to the cluster.
- Apply a parameterized job using a cluster-stored template.
- Apply a cronjob to schedule recurring tasks.
- Create a new job using a provided specification.
- Create a parameterized job based on a cluster-stored template.
- Create a one-time job from an existing cronjob definition.
- Create a new cronjob for scheduled task execution.
- Create and store a reusable job or cronjob template in the cluster.
- Delete a job, cronjob, or template by name and project.

Additional options and post-action capabilities vary depending on the chosen Action.
Action Output
EXTENSION

The extension output provides information about the task invocation and the action's result, as shown in the examples below.

Successful execution of the "Apply Job" action, with all Extension Output options selected:

```json
{
    "exit_code": 0,
    "status_description": "Job applied successfully.",
    "invocation": {
        "extension": "ue-openshift-jt",
        "version": "1.0.2",
        "fields": {
            ...
        }
    },
    "result": {
        "containers": [
            {
                "pod_name": "pi-simple-deploy-2lns6",
                "container_name": "pi",
                "container_log": "3.1416",
                "exit_code": "0",
                "restart_count": "0"
            },
            {
                "pod_name": "pi-simple-deploy-2lns6",
                "container_name": "hello",
                "container_log": "Hello World!\n",
                "exit_code": "0",
                "restart_count": "0"
            }
        ],
        "resource_specification": {
            "name": "pi-simple-deploy",
            "namespace": "extension",
            "active_deadline_seconds": 1800,
            "backoff_limit": 6,
            "completions": 1,
            "parallelism": 1,
            "pod_failure_policy": null,
            "pod_replacement_policy": "TerminatingOrFailed",
            "success_policy": null,
            "ttl_seconds_after_finished": 3600,
            "suspend": false
        }
    }
}
```

Failed execution:

```json
{
    "exit_code": 21,
    "status_description": "Resource Naming Error: A job with the name pi-simple-deploy already exists in the namespace extensions",
    "invocation": {
        "extension": "ue-openshift-jt",
        "version": "1.0.2",
        "fields": {...}
    },
    "result": {
        "errors": [
            "Resource Naming Error: A job with the name pi-simple-deploy already exists in the namespace extensions"
        ]
    }
}
```
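Downstream automation can parse this extension output. The sketch below is illustrative (the helper name is hypothetical); the field names, exit codes, and messages are taken from the examples above: it reports the status description on success and surfaces `result.errors` on failure.

```python
import json

def summarize_extension_output(raw: str) -> str:
    """Summarize extension output JSON: return the status description on
    success (exit_code 0), otherwise join the reported errors.
    (Helper name is illustrative; field names follow the examples above.)"""
    out = json.loads(raw)
    if out["exit_code"] == 0:
        return out["status_description"]
    return "; ".join(out.get("result", {}).get("errors", []))

failure = json.dumps({
    "exit_code": 21,
    "status_description": "Resource Naming Error: ...",
    "result": {"errors": [
        "Resource Naming Error: A job with the name pi-simple-deploy "
        "already exists in the namespace extensions"
    ]},
})
print(summarize_extension_output(failure))
```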
STDOUT

The STDOUT output produced during the execution of the task instance. Output can be controlled via the STDOUT Options input field; both of its options are only applicable if Wait for Success or Failure has been selected.

Example of Pod Monitoring output and Container logs:

```text
--- Pod Monitoring Events for job pi-simple-deploy ---
Event: ADDED
Event Time: 2025-MM-DD 16:18:20 +0300
Pod Name: pi-simple-deploy-2lns6
Phase: Pending
Conditions: PodScheduled
Start Time: None

Event: MODIFIED
Event Time: 2025-MM-DD 16:18:21 +0300
Pod Name: pi-simple-deploy-2lns6
Phase: Pending
Conditions: PodReadyToStartContainers, Initialized, Ready, ContainersReady, PodScheduled
Start Time: 2025-MM-DD 13:18:20+00:00
Container: hello
  Waiting:
    Reason: ContainerCreating
Container: pi
  Waiting:
    Reason: ContainerCreating
...
Pod pi-simple-deploy-2lns6 has reached phase 'Succeeded' but is pending deletion due to finalizers

Event: MODIFIED
Event Time: 2025-05-09 16:18:25 +0300
Pod Name: pi-simple-deploy-2lns6
Phase: Succeeded
Conditions: PodReadyToStartContainers, Initialized, Ready, ContainersReady, PodScheduled
Start Time: 2025-MM-DD 13:18:20+00:00
Container: hello
  Terminated:
    Exit Code: 0
    Reason: Completed
    Started at: 2025-MM-DD 13:18:22+00:00
    Finished at: 2025-MM-DD 13:18:22+00:00
Container: pi
  Terminated:
    Exit Code: 0
    Reason: Completed
    Started at: 2025-MM-DD 13:18:21+00:00
    Finished at: 2025-MM-DD 13:18:22+00:00
Pod pi-simple-deploy-2lns6 has reached phase 'Succeeded'
Pod pi-simple-deploy-2lns6 successfully completed, 1/1 pods completed.
Job pi-simple-deploy execution finished, status is 'SuccessCriteriaMet'.

--- Logs for containers generated for job pi-simple-deploy at 2025-MM-DD 16:26:09 +0300 ---
Logs for pod pi-simple-deploy-lin-systempython-2lns6
Logs for container pi:
3.1416
Logs for container hello:
Hello World!
```
Configuration Examples
Example: Apply a Job Definition to the OpenShift Cluster
This configuration applies a job definition directly to the cluster using basic authentication (username and password). SSL verification is enabled with a custom certificate path. The job definition is fetched from an HTTP link, using the GIT_TOKEN environment variable for access. After the job is applied, the task waits for it to complete and deletes the job from the cluster if it finishes successfully. All output options are enabled, providing additional information on STDOUT and Extension Output.
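The HTTP retrieval described above can be sketched as follows. The helper name is hypothetical; the only behaviour taken from the text is that the GIT_TOKEN environment variable, when set, is sent as a Bearer token.

```python
import os
import urllib.request

def build_definition_request(url: str) -> urllib.request.Request:
    """Build the HTTP request for a remote resource definition.

    If the GIT_TOKEN environment variable is set, it is sent as a Bearer
    token, mirroring the behaviour described above.
    (Helper name and structure are illustrative.)
    """
    headers = {}
    token = os.environ.get("GIT_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)

os.environ["GIT_TOKEN"] = "example-token"  # illustrative value
req = build_definition_request("https://example.com/job.yaml")
print(req.get_header("Authorization"))  # → Bearer example-token
```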
The configuration for tasks to Create a Job on the cluster remains unchanged compared to the configuration to Apply a Job; only a different action is selected from the available fields. This is true for all deployment types.
Example: Create a Template on the OpenShift Cluster
This configuration creates a template on the cluster using token-based authentication. The resource definition is taken from a local file that resides on the agent. The ‘extensions’ project is specified, meaning the template will be created in that project unless a different project is specified in the Template manifest. Information about the deployed template is shown in the Extension Output. Note that although container-information retrieval is selected, no containers are tied to the deployment of a Template, so no additional information will appear in the Extension Output.
Example: Create a Job from a Template stored on the OpenShift cluster
This configuration creates a Job from a cluster-stored template using basic authentication (username and password). Input parameters are specified to customize the job described in the template. The resulting job is created on the cluster, and the task returns immediately, showing the definition of the deployed job on the Extension’s Output.
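Template processing substitutes the supplied input parameters into OpenShift-style ${NAME} references before the Job is created. The sketch below is a simplified stand-in for what the cluster's template processing does; the helper name and sample manifest are illustrative.

```python
import re

def process_template(obj: dict, params: dict) -> dict:
    """Substitute OpenShift-style ${NAME} parameter references in all
    string values of a template object. A simplified sketch of cluster-side
    template processing; unknown parameters are left untouched."""
    def subst(value):
        if isinstance(value, str):
            return re.sub(
                r"\$\{(\w+)\}",
                lambda m: str(params.get(m.group(1), m.group(0))),
                value,
            )
        if isinstance(value, dict):
            return {k: subst(v) for k, v in value.items()}
        if isinstance(value, list):
            return [subst(v) for v in value]
        return value
    return subst(obj)

job = {
    "metadata": {"name": "${JOB_NAME}"},
    "spec": {"template": {"spec": {"containers": [
        {"name": "main", "image": "${IMAGE}"}
    ]}}},
}
out = process_template(job, {"JOB_NAME": "pi-from-template", "IMAGE": "perl:5.34"})
print(out["metadata"]["name"])  # → pi-from-template
```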
Example: Create a CronJob on the OpenShift Cluster
This configuration creates a CronJob using basic authentication (username and password), with SSL verification enabled. The definition of the CronJob to be deployed is retrieved via HTTP link, using the AUTH_USER and AUTH_PASS environment variables for authentication. The OpenShift resource specification for the deployed CronJob will be visible in the Extension's output. However, once again, container information will be omitted, as no containers are tied to the deployment of a CronJob.
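One plausible way such environment variables can be used is by embedding references to them in the definition URL, which are resolved before the request is made. This is a sketch only (the extension's exact resolution rules are not specified in this document); Python's os.path.expandvars gives equivalent $VAR / ${VAR} behaviour.

```python
import os

# Illustrative values; in a real task these would already exist in the
# task's environment.
os.environ["AUTH_USER"] = "alice"
os.environ["AUTH_PASS"] = "s3cret"

# Environment-variable references embedded in the definition URL are
# resolved before the request is made (sketch only; the extension's
# exact resolution rules are not specified in this document).
url = "https://${AUTH_USER}:${AUTH_PASS}@git.example.com/cronjob.yaml"
print(os.path.expandvars(url))  # → https://alice:s3cret@git.example.com/cronjob.yaml
```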
Example: Create a Job from a CronJob stored on the OpenShift cluster
This configuration creates a job from an existing CronJob, using token authentication and specifying the ‘extensions’ project. The job is created based on the CronJob's specification and receives the name specified in the relevant field. The resulting Job is monitored, producing information on both output streams. Note that container logs are retrieved and displayed only if the job fails.
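Creating a one-off Job from a CronJob amounts to reusing the CronJob's spec.jobTemplate under a new Job name, similar to `oc create job --from=cronjob/<name>`. A minimal sketch, with illustrative names:

```python
import copy

def job_from_cronjob(cronjob: dict, new_job_name: str) -> dict:
    """Build a one-off Job manifest from an existing CronJob definition
    by reusing its spec.jobTemplate (sketch; names are illustrative)."""
    job_template = cronjob["spec"]["jobTemplate"]
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "name": new_job_name,
            "namespace": cronjob["metadata"].get("namespace"),
        },
        "spec": copy.deepcopy(job_template["spec"]),
    }

cronjob = {
    "apiVersion": "batch/v1",
    "kind": "CronJob",
    "metadata": {"name": "nightly-report", "namespace": "extensions"},
    "spec": {
        "schedule": "0 2 * * *",
        "jobTemplate": {"spec": {"template": {"spec": {
            "restartPolicy": "Never",
            "containers": [
                {"name": "report", "image": "registry.example.com/report:1"}
            ],
        }}}},
    },
}
one_off = job_from_cronjob(cronjob, "nightly-report-manual-1")
print(one_off["kind"], one_off["metadata"]["name"])  # → Job nightly-report-manual-1
```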
Example: Delete a Resource present on the OpenShift Cluster
This configuration deletes a Template present on the OpenShift cluster. The Resource Type chosen can be a Job, CronJob, or Template. The relevant Resource Name dynamic choice field can be used to load all resources of the specified type.
Importable Configuration Examples
This integration provides importable configuration examples along with their dependencies, grouped as Use Cases to better describe end-to-end capabilities.
These examples help task authors become familiar with the configuration of tasks and the related Use Cases. They should be imported into a test system and should not be used directly in production.
Initial Preparation Steps
STEP 1: Go to Stonebranch Integration Hub and download the "Red Hat: OpenShift Jobs & Templates" integration. Extract the downloaded archive to a directory on a local drive.
STEP 2: Locate and import the above integration to the target Universal Controller. For more information, refer to the "How To" section in this document.
STEP 3: For "Red Hat: OpenShift Jobs & Templates", inside the directory named "configuration_examples" you will find a list of definition zip files. Upload them one by one, respecting the order presented below, using the "Upload" functionality of Universal Controller:
variables.zip
credentials.zip
scripts.zip
tasks.zip
workflows.zip
STEP 4: Update the uploaded UAC Credential entity with the proper Username / Password.
STEP 5: Update the UAC global variables introduced with variables.zip file. Their name is prefixed with "ue_openshift_jt". Review the descriptions of the variables as they include information on how they should be populated.
STEP 6 (Only for UC1): Create an OMS record on the Universal Controller. Ensure the address matches the one assigned to the relevant global variable from the previous step. For more information, refer to Creating OMS Server Records in this document.
STEP 7 (Only for UC1): Create an Agent Cluster on the Universal Controller. Ensure the name matches the one assigned to the relevant global variable from the previous step. For more information, refer to Creating an Agent Cluster in this document.
The order indicated above ensures that the dependencies of the imported entities are uploaded first.
How to "Upload" Definition Files to a Universal Controller
The "Upload" functionality of Universal Controller allows users to import definitions exported with the "Download" functionality.
Login to Universal Controller and:
STEP 1: Click on "Tasks" → "All Tasks"
STEP 2: Right click on the top of the column named "Name"
STEP 3: Click "Upload..."
In the pop-up "Upload..." dialogue:
STEP 1: Click "Choose File".
STEP 2: Select the appropriate zip definition file and click "Upload".
STEP 3: Observe the Console for possible errors.
Use Case 1: Deploy a Universal Agent inside an OpenShift Cluster, using it to execute a Linux Task.
Description
In this Use Case, the workflow demonstrates how to deploy a Universal Agent inside an OpenShift cluster, execute a Linux task via the deployed agent, and then clean up by deleting the agent.
The tasks configured demonstrate the following capabilities among others:
Containerized deployment of a Universal Agent inside an OpenShift cluster using the ue-openshift-jt extension.
Execution of a Linux task through a Universal Agent deployed on an OpenShift Cluster.
The components of the solution are described below:
"UC1: OpenShift JT - Single Container Agent Deployment" – Deploys a Universal Agent inside the configured OpenShift cluster using the ue-openshift-jt extension.
"UC1: Linux Task Execution" – Executes a placeholder Linux task using the deployed Universal Agent.
"UC1: OpenShift JT - Deployed Agent Deletion" – Deletes the previously deployed Universal Agent, cleaning up all resources, using the ue-openshift-jt extension.
How to Run
Execution Steps:
STEP 1: Manually start the workflow. This triggers the first task, "UC1: OpenShift JT - Single Container Agent Deployment", which deploys the agent inside the target OpenShift cluster.
STEP 2: Once the agent is deployed, the second task, "UC1: Linux Task Execution", runs a placeholder Linux command through the Universal Agent.
STEP 3: After the Linux task completes, the final task, "UC1: OpenShift JT - Deployed Agent Deletion", deletes the deployed agent, ensuring no leftover resources remain.
Expected Results:
The Universal Agent is successfully deployed inside the OpenShift cluster.
The Linux task is executed via the agent.
The Universal Agent is deleted after task execution, leaving no residual resources in the cluster.
Use Case 2: Create an OpenShift Template, Run a Parameterized Job, and Handle Success/Failure
Description
In this Use Case, the workflow demonstrates how to create an OpenShift Template, generate a parameterized Job from it, handle the job’s success or failure, and then perform appropriate cleanup. If the Job fails, an email notification is sent; if the Job succeeds, both the Job and the Template are deleted.
The tasks configured demonstrate the following capabilities among others:
Creation of an OpenShift Template using the ue-openshift-jt extension.
Generation and execution of a parameterized Job from the created Template.
Conditional execution of an email notification task in case of Job failure.
Cleanup by deleting the OpenShift Template and Job resources in case of Job success.
The components of the solution are described below:
"UC2: OpenShift JT - Template Creation" – Creates an OpenShift Template in the target OpenShift cluster.
"UC2: OpenShift JT - Parametrized Job Creation" – Generates a Job from the created Template and executes it with specific parameters.
"UC2: OpenShift JT - Send Email" – Sends an email notification if the Job fails.
"UC2: OpenShift JT - Template Deletion" – Deletes the created Template and Job if the Job was successful.
How to Run
Execution Steps:
STEP 1: Manually start the workflow. This triggers the first task, "UC2: OpenShift JT - Template Creation", which creates the OpenShift Template inside the cluster.
STEP 2: The second task, "UC2: OpenShift JT - Parametrized Job Creation", runs next. It uses the previously created Template to create and execute a parameterized Job.
STEP 3:
If the Job fails, the third task, "UC2: OpenShift JT - Send Email", sends a failure notification email.
If the Job succeeds, the job is automatically deleted, and the final task, "UC2: OpenShift JT - Template Deletion", deletes the Template, leaving the cluster in its original state.
Expected Results:
The OpenShift Template is successfully created in the cluster.
The parameterized Job runs from the created Template.
In case of failure, an email notification is sent.
On success, both the Job and Template are deleted, ensuring no residual resources remain in the cluster.
Input Fields
| Name | Type | Description | Version Information |
|---|---|---|---|
| Action | Choice | The action to be executed. Options: Apply Job, Apply Job from Template, Apply CronJob, Create Job, Create Job from Template, Create Job from CronJob, Create CronJob, Create Template, Delete Job. | Introduced in 1.0.0 |
| OpenShift Server and Port URL | Text | The OpenShift server and port information. Ensure you specify the appropriate URL for API communication. Example: https://api.example.openshift.com:6443 | Introduced in 1.0.0 |
| Project | Text | The OpenShift project (namespace) in which resources are to be deployed or deleted. For resource deployment, this field is used only as a fallback when no 'namespace' value is specified inside the Resource Definition. | Introduced in 1.0.0 |
| Authentication Method | Choice | The authentication method to be used. Options: Basic Authentication, Access Token. | Introduced in 1.0.0 |
| OpenShift Credentials | Credentials | The Credentials used to log in to OpenShift. The definition of the Credential should be as follows: Only available if Authentication Method is set to "Basic Authentication". | Introduced in 1.0.0 |
| OpenShift Credentials | Credentials | The credentials used to log in to OpenShift. The definition of the Credential should be as follows: Note that the hash algorithm prefix should be included in the token string. Only available if Authentication Method is set to "Access Token". | Introduced in 1.0.0 |
| Enable SSL Verification | Checkbox | Enable to verify the server's SSL certificate; disable to skip verification. | Introduced in 1.0.0 |
| SSL Certificate Path | Text | Path to a custom CA certificate. If this field is left empty, the Mozilla-maintained CA bundle is used. | Introduced in 1.0.0 |
| Resource Definition Source | Choice | Specifies the source of the Job, CronJob, or Template manifest. The input must be in YAML or JSON format and must define exactly one resource. Options: UAC Script, Local File, HTTP Link. Available if Action is set to any of "Apply Job", "Apply CronJob", "Create Job", "Create CronJob", "Create Template". | Introduced in 1.0.0 |
| Resource Definition Script | Script | The UAC Script containing the resource definition. Mandatory if Resource Definition Source is set to "UAC Script". | Introduced in 1.0.0 |
| Resource Definition File Path | Text | The Agent's local full filename path to the resource definition. Mandatory if Resource Definition Source is set to "Local File". | Introduced in 1.0.0 |
| Resource Definition Link | Text | The HTTP link that points to the Job definition or CronJob/Template manifest. Environment variables in the URL are resolved; the GIT_TOKEN environment variable, if set, is used as a Bearer token for authorization. Mandatory if Resource Definition Source is set to "HTTP Link". | Introduced in 1.0.0 |
| Template to Use | Dynamic Choice | The Template to use to create a new Job. Available and mandatory if Action is set to "Apply Job from Template" or "Create Job from Template". | Introduced in 1.0.0 |
| Input Parameters | Array Field | Parameters to use when creating a Job from a Template. Available and mandatory if Action is set to "Apply Job from Template" or "Create Job from Template". | Introduced in 1.0.0 |
| CronJob to Use | Dynamic Choice | The CronJob to use when creating a Job from a CronJob. Available and mandatory if Action is set to "Create Job from CronJob". | Introduced in 1.0.0 |
| New Job Name | Text | The name that the created Job should have. Available and mandatory if Action is set to "Create Job from CronJob". | Introduced in 1.0.0 |
| Resource Type | Choice | The type of resource to be deleted. Options: Job, CronJob, Template. Available and mandatory if Action is set to "Delete Job". | Introduced in 1.0.0 |
| Resource Name | Dynamic Choice | The name of the resource to be deleted. Available and mandatory if Action is set to "Delete Job". | Introduced in 1.0.0 |
| Wait for Success or Failure | Checkbox | Choose whether the UAC task should wait until the deployed Job is completed. Available if Action is set to "Apply Job", "Apply Job from Template", "Create Job", "Create Job from Template", or "Create Job from CronJob". | |