Azure OpenAI
Disclaimer
Your use of this download is governed by Stonebranch’s Terms of Use.
Version Information
| Template Name | Extension Name | Version | Status |
|---|---|---|---|
| Azure OpenAI | ue-azure-openai | 1 (Current: 1.1.0) | Fixes and new features are introduced |
Refer to Changelog for version history information.
Overview
The Azure OpenAI Integration brings the power of enterprise-grade large language models into your UAC workflows, enabling intelligent automation that can understand, generate, and transform content. By connecting your automation workflows to Azure-hosted AI models, you can build solutions that combine the reliability and orchestration capabilities of UAC with the reasoning and language capabilities of modern LLMs.
This integration opens up a wide range of automation possibilities: from analyzing logs and generating human-readable summaries of complex system states, to intelligently routing and classifying data, transforming unstructured information into structured formats, or even making context-aware decisions within your workflows.
The integration works with both Azure OpenAI Service and Azure AI Foundry platforms, giving you flexibility in how you deploy and manage your AI models. Prompts can be enriched by files or UAC Variables and outputs can be persisted for reuse across multiple tasks and workflows, enabling you to chain AI operations together or share insights across your automation ecosystem.
Key Features
| Feature | Description |
|---|---|
| Ask AI | Sends a System and a User prompt to an Azure-hosted AI model and returns the response. |
| Response Output | Stores the LLM response as a local file. |
| Prompt Enrichment | Enriches prompts with data from files and UAC Variables. |
| Configurable data retention controls | Configurable data retention controls via the Allow Conversation Data Retention parameter, with support for zero data retention when used with Modified Abuse Monitoring. For more information refer to Data Retention and Privacy. |
Requirements
This integration requires a Universal Agent and a Python runtime to execute the Universal Task.
| Area | Details |
|---|---|
| Python Version | Requires Python 3.11. Tested with the Agent-bundled Python distribution. |
| Universal Agent Compatibility | Compatible with Universal Agent for Windows x64, version >= 7.9.0.0, and Universal Agent for Linux, version >= 7.9.0.0. |
| Universal Controller Compatibility | Universal Controller version >= 7.9.0.0. |
| Network and Connectivity | Network connectivity towards Azure OpenAI is required. |
Supported Actions
Ask AI:
- Ask AI (Chat Completions API) - Sends a System and a User prompt to models via the Chat Completions API. The Chat Completions API is the traditional approach and offers broader model support (including some older, cheaper models, and non-OpenAI models compatible with the OpenAI API specification, such as Llama-3.3-70B-Instruct).
- Ask AI (Responses API) - Sends a System and a User prompt to models via the Responses API. It is, in a way, the evolution of the Chat Completions API, with narrower model coverage (some older OpenAI models and third-party non-OpenAI models are not supported).
In the current implementation, both APIs are used in a stateless mode. As a result, they behave similarly and offer the same functional capabilities.
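To make the difference concrete, the sketch below builds the request payloads for the two API styles. This follows the public Azure OpenAI REST conventions and is not the integration's internal code; the `store` flag reflects the stateless usage described above.

```python
# Sketch: how the two stateless API styles differ in request shape.
# Payload keys follow the public Azure OpenAI REST conventions; the
# integration's own implementation may differ.

def chat_completions_payload(system_prompt: str, user_prompt: str,
                             model: str = "gpt-4o") -> dict:
    """Chat Completions API: prompts travel as a list of role-tagged messages."""
    return {
        "model": model,  # the Azure *deployment* name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def responses_payload(system_prompt: str, user_prompt: str,
                      model: str = "gpt-4o") -> dict:
    """Responses API: the system prompt moves to 'instructions', the user
    prompt to 'input'. 'store': False keeps the call stateless."""
    return {
        "model": model,
        "instructions": system_prompt,
        "input": user_prompt,
        "store": False,
    }

print(chat_completions_payload("You are terse.", "Summarize this log."))
print(responses_payload("You are terse.", "Summarize this log."))
```

Because both payloads carry the same two prompts, the two actions are interchangeable for stateless use; the choice mainly affects which models are reachable.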
Configuration examples
Example: Ask AI (Chat Completions API) with Service Principal authentication and save the response to a Local Path
In this example, the user configures a Universal Task to invoke the Chat Completions API using Azure Service Principal authentication and the gpt-4o model. The task is configured to capture the AI-generated response in the EXTENSION output rather than printing it to STDOUT. Finally, the user enables the display of token usage metadata, providing visibility into the resources consumed during response generation.
Example: Ask AI (Responses API) with API Key authentication and fetch prompts from files
In this example, the user configures a Universal Task to invoke the Responses API using API Key authentication. Instead of defining the prompt inline, the task retrieves one or more prompts from files stored on disk. The user has also configured the task to display the entire conversation in both STDOUT and the EXTENSION output.
Example: Ask AI (Responses API) with API Key authentication and fetch system prompt from a UAC Variable
In this example, the user configures a Universal Task to invoke the Responses API using API Key authentication. The system prompt is dynamically retrieved from the ops_task_name UAC Variable. The task is configured to print the AI-generated response to STDOUT and EXTENSION output
Example: Ask AI (Chat Completions API) with API Key authentication and fetch the prompts from UAC Scripts
In this example, the user configures a Universal Task to invoke the Chat Completions API using API Key authentication. Both the system prompt and the user prompt (conversation thread) are sourced from UAC Scripts. The task is also configured to display the entire conversation in STDOUT as well as in the Extension Output.
Example: Ask AI (Chat Completions API) with API Key using Azure Foundry endpoint and a 3rd-party Model
In this example, the user configures a Universal Task to invoke the Chat Completions API using API Key authentication against an Azure Foundry endpoint. Instead of an OpenAI model, the task is configured to use the Llama-3.3-70B-Instruct third-party model. The task is configured to display the latest response in STDOUT as well as in the EXTENSION Output.
Action Output
| Output Type | Description | Examples |
|---|---|---|
| EXTENSION | The extension output provides the information selected via the EXTENSION Output task field (e.g., the latest response or the entire conversation). | |
| STDOUT | When configured, shows the "Latest Response Only" or the "Entire Conversation" with the LLM in a human-readable way. | |
| STDERR | Shows the logs from the Task Instance execution. The verbosity is controlled by the Task configuration Log Level. | |
Importable Configuration Examples
This integration provides importable configuration examples along with their dependencies, grouped as Use Cases to better describe end-to-end capabilities.
These examples help task authors become more familiar with the configuration of tasks and related Use Cases. Such tasks should be imported into a test system and should not be used directly in production.
Initial Preparation Steps
- STEP 1: Go to Stonebranch Integration Hub and download "Azure OpenAI", "UAC Utility: Email Monitor", and "ServiceNow: Incident" integrations.
- STEP 2: Locate and import the above integrations to the target Universal Controller. For more information refer to the How To section in this document.
- STEP 3: Extract the Azure OpenAI archive. Inside the directory named "configuration_examples" you will find a list of definition zip files. Upload them one by one, in the order given below, using the "Upload" functionality of the Universal Controller:
- 1_variables.zip
- 2_credentials.zip
- 3_scripts.zip
- 4_tasks.zip
- 5_monitors.zip
- 6_workflows.zip
- 7_triggers.zip
- STEP 4: Update the uploaded UAC Credential entities for the Azure OpenAI, Email Monitor, and ServiceNow Incident integrations.
- STEP 5: Update the UAC global variables introduced with the 1_variables.zip file. Their names are prefixed with "ue_azure_openai". Review the descriptions of the variables, as they include information on how they should be populated.
- The upload order indicated above ensures that the dependencies of the imported entities are in place before the entities that reference them.
- All imported entities are prefixed with UC1: Azure OpenAI.
How to "Upload" Definition Files to a Universal Controller
The "Upload" functionality of Universal Controller allows Users to import definitions exported with the "Download" functionality.
Login to Universal Controller and:
- STEP 1: Click "Tasks"→"All Tasks"
- STEP 2: Right click on the top of the column named "Name"
- STEP 3: Click "Upload..."
In the pop-up "Upload..." dialogue:
- STEP 1: Click "Choose File".
- STEP 2: Select the appropriate zip definition file and click "Upload".
- STEP 3: Observe the Console for possible errors.
Use Case 1: Email Classification, Output Transformation & Automated Incident Creation on ServiceNow
Description
This workflow uses Azure OpenAI to analyze incoming emails and send them to the right process. The Email Monitor integration watches a mailbox and creates a Universal Event for each new message, triggering a UAC Workflow. Azure OpenAI UAC Tasks then classify the email as a purchase request, an infrastructure IT issue, or an undefined case that needs human review.
If the email is an infrastructure IT request, another Azure OpenAI Task converts the email content into structured JSON, which the ServiceNow Incident integration uses to automatically create an Incident. This demonstrates end-to-end branching and data transformation in UAC powered by Azure OpenAI.
The tasks configured demonstrate the following capabilities among others:
- Capability to trigger a task or a workflow based on received Emails.
- Propagation of useful information (like Email attributes) to downstream tasks.
- Utilize Azure OpenAI Tasks to classify information and transform it into structured formats.
- Handle business cases where the automatic creation of ServiceNow Incidents is necessary.
The components of the solution are described below:
1. Represents the receipt of an e-mail to a Mailbox.
2. "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)" - Subscribes to the Microsoft server, monitors emails, saves attachments and emails on the Agent's filesystem, and sends a Universal Event for each monitored email.
3. "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)" - Monitors the Universal Events published from "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)".
4. "UC1: Azure OpenAI - Trigger Workflow" - Triggers the configured workflow "UC1: Azure OpenAI - Email Classification & Transformation" when the Monitor condition on (3) is true.
5. "UC1: Azure OpenAI - Email Classification Task" - Calls gpt-4 to classify the contents of the received e-mail with one of the following values: "purchase", "infrastructure", or "undefined".
6. "UC1: Azure OpenAI - Email Transformation to Incident Task" - Transforms the contents of the received e-mail into a "ServiceNow Incident"-compatible JSON.
7. "UC1: Azure OpenAI - Create a ServiceNow Incident Task" - Creates a ServiceNow Incident using the JSON produced in the previous step.
8. "UC1: Azure OpenAI - Forward Purchase Request To Finance" - Task representing actions related to forwarding a purchase request to a Finance department.
9. "UC1: Azure OpenAI - Represents Manual Actions" - Task representing human intervention for cases that could not be classified by the "Azure OpenAI: Email Classification Task".
How to Run
Execution Steps:
- STEP 1: Enable Trigger "UC1: Azure OpenAI - Trigger Workflow". This automatically starts "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)" and "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)"
- STEP 2: Send an e-mail to the address monitored by "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)".
- STEP 3: Review the outputs of (7,8,9) depending on the classification produced by (5).
Expected Results:
- Mail is received, successfully monitored, and a workflow is launched.
- Task "UC1: Azure OpenAI - Email Classification Task" successfully produces a JSON-structured classification with one of the following values: "purchase", "infrastructure", "undefined".
- If the classification label is "infrastructure", Task "UC1: Azure OpenAI - Email Transformation to Incident Task" launches successfully.
Ready-to-use e-mail messages:
Input Fields
| Name | Type | Description | Version Information |
|---|---|---|---|
| Action | Choice | The action performed upon the task execution. | Introduced in 1.0.0 |
| Azure Open AI / AI Foundry Endpoint | Text | The project endpoint provided by the Azure platform. The Foundry endpoint enables access to a wider range of models. | Introduced in 1.0.0 |
| Authentication Method | Choice | The authentication method used to connect to the Azure OpenAI / Foundry platform. Authentication methods currently supported: "Service Principal" and "API Key". | Introduced in 1.0.0 |
| Azure Credentials | Credentials | The Azure Credentials to access the Azure OpenAI / Foundry platform. In case of Authentication Method = 'Service Principal', the Service Principal credentials should be provided in the credential definition. When Authentication Method = 'API Key', the API Key needs to be placed in the Token field. | Introduced in 1.0.0 |
| Azure Tenant ID | Text | The Azure AD Tenant ID. This field is visible when "Service Principal" Authentication Method is selected. | Introduced in 1.0.0 |
| Azure Subscription | Dynamic Choice | The Azure Subscription ID. This field is visible when "Service Principal" Authentication Method is selected. | Introduced in 1.0.0 |
| Resource Group | Dynamic Choice | The Azure Resource group related to the Azure OpenAI Resource. This field is visible when "Service Principal" Authentication Method is selected. | Introduced in 1.0.0 |
| Model | Text / Dynamic Choice | The AI Model (deployment) to be used. It is converted to a Dynamic Choice Field when "Service Principal" Authentication Method is selected. | Introduced in 1.0.0 |
| System Prompt Source | Choice | The source of the System Prompt, e.g., inline Text or "Prompt Library (UAC Script)". | Introduced in 1.0.0 |
| System Prompt | Text / Script | The system prompt to be sent to the AI model. This field supports Dynamic Placeholders. Users can type the following template functions and place them anywhere within the System Prompt or the Conversation Thread fields, constructing the prompt in a dynamic way:
It is a Script input field when "Prompt Library (UAC Script)" System Prompt Source is selected; otherwise it is Text. Dynamic Placeholders work with both Text and Script field types. | Introduced in 1.0.0 |
| Conversation Thread Source | Choice | The source of the Conversation Thread (user prompt), e.g., inline Text or "Prompt Library (UAC Script)". | Introduced in 1.0.0 |
| Conversation Thread | Text / Script | The Conversation Thread (user prompt) to be sent to the AI model. This field supports Dynamic Placeholders. Users can type the following template functions and place them anywhere within the System Prompt or the Conversation Thread fields, constructing the prompt in a dynamic way:
It is a Script input field when "Prompt Library (UAC Script)" Conversation Thread Source is selected; otherwise it is Text. Dynamic Placeholders work with both Text and Script field types. | Introduced in 1.0.0 |
| Response Format | Choice | Select the output format of the AI response. Available Response Formats: "Text", "JSON (Prompt Described)", and "JSON Schema (API Enforced)". For more information please read Structured Outputs. | Introduced in 1.0.0 |
| JSON Schema Response | Script | Provide the JSON Schema to format the AI model's output. This field is visible when "JSON Schema (API Enforced)" Response Format is selected. For more information please read Structured Outputs | Introduced in 1.0.0 |
| Save Options | Choice | Specify the content to be saved from the response, e.g., "Save Entire Conversation" or "Save Latest Response Only". | Introduced in 1.0.0 |
| Save To | Choice | Specify the save location of the response (e.g., a Local Path). This field is visible when either "Save Entire Conversation" or "Save Latest Response Only" Save Options is selected. | Introduced in 1.0.0 |
| Local Path | Text | Provide the file path to save the response. If the file does not exist, the extension will attempt to create it. If the file already exists it will be overwritten. This field is visible when either "Save Entire Conversation" or "Save Latest Response Only" Save Options is selected. | Introduced in 1.0.0 |
| Dry Run | Checkbox | Validate inputs without calling the Azure API. Useful for troubleshooting and testing purposes. | Introduced in 1.0.0 |
| Advanced Options | Checkbox | Enable advanced options. | Introduced in 1.0.0 |
| Temperature | Float | Control the Sampling Temperature of the response. This field is visible when Advanced Options is selected. | Introduced in 1.0.0 |
| Top P | Float | Control the Nucleus Sampling of the response. Not all models support this value and may produce an error when it is set. In addition, the accepted range differs between models. This field is visible when Advanced Options is selected. | Introduced in 1.0.0 |
| Frequency Penalty | Float | Control the repetition of the response. Not all models support providing this value and may produce an error when set. In addition, between models, the accepted range is different. This field is visible when Advanced Options is selected. | Introduced in 1.0.0 |
| Presence Penalty | Float | Encourage new Topics in the response. Not all models support providing this value and may produce an error when set. In addition, between models, the accepted range is different. This field is visible when Advanced Options is selected. | Introduced in 1.0.0 |
| Max Tokens | Integer | Specify the maximum number of tokens to be generated by the LLM. This field controls the length of the response. If the response attempts to exceed the limit, it is abruptly stopped. This field is visible when Advanced Options is selected. | Introduced in 1.0.0 |
| Allow Conversation Data Retention | Checkbox | Controls whether Azure stores conversation data for Stored Completions and Responses API features. When disabled, the integration sets the store parameter to false in all API calls. Note: This parameter does not control Azure's abuse monitoring system. For complete zero data retention, refer to the Data Retention and Privacy section. | Introduced in 1.0.0 |
| STDOUT Output | Choice | Specify the information to be displayed in STDOUT, e.g., "Latest Response Only" or "Entire Conversation". | Introduced in 1.0.0 |
| EXTENSION Output | Multiple Choice | Specify the information to be included in the EXTENSION output, e.g., "Latest Response Only" or "Entire Conversation". | Introduced in 1.0.0 |
Structured Outputs
The Azure OpenAI Integration offers three ways to control how the AI model formats its responses. This guide explains each option and helps you choose the right one for your needs.
Text
Plain text responses with no formatting constraints. The model responds naturally in plain text. You describe what you want in your prompt, and the model does its best to follow your instructions. Best used for natural language responses without structure.
However, an "N-shot inference strategy" in your prompt can still produce good results for a structured response. This mode is helpful when working with older, cheaper models for which API-enforced JSON structure is not available.
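An N-shot prompt can be as simple as embedding a few worked input/output pairs in the system prompt. The example categories below are invented for illustration:

```python
# Sketch of an "N-shot" prompt for Text mode: a few worked examples steer an
# older model toward a structured reply without any API-level enforcement.
# The categories and example texts here are purely illustrative.
examples = [
    ("Disk usage at 95% on db01", '{"severity": "high", "category": "storage"}'),
    ("Please order 3 laptops", '{"severity": "low", "category": "purchase"}'),
]
shots = "\n\n".join(f"Input: {q}\nOutput: {a}" for q, a in examples)
system_prompt = (
    "Classify each input and reply with a single JSON object, "
    "exactly as in these examples:\n\n" + shots
)
print(system_prompt)
```

Because nothing is enforced by the API, the reply should still be validated (e.g., parsed with `json.loads`) before downstream use.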
JSON (Prompt Described)
The model returns JSON, but you describe the structure in your prompt. The API guarantees valid JSON syntax; however, you must describe the fields and structure you want in your prompt. The model will try to follow your description, but there is no strict enforcement. This strategy shows very good and consistent results for the majority of cases.
How it works:
- The API ensures the response is valid JSON (proper brackets, quotes, commas)
- You describe the desired structure in your prompt
- The model attempts to match your description, but may not strictly adhere to it.
Requirements:
- Your prompt must mention JSON - The API requires this. Without it, you'll get an error.
- You should describe the desired structure in your system prompt
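A request in this mode can be sketched as below. The helper enforces the documented requirement that the prompt mention JSON before attaching `response_format: json_object`; this is a sketch of the request shape, not the integration's code.

```python
# Sketch of a Chat Completions request using JSON mode ('json_object').
# The API rejects JSON mode unless "JSON" appears in the prompt, so the
# helper checks that up front instead of letting the call fail remotely.

def json_mode_payload(system_prompt: str, user_prompt: str,
                      model: str = "gpt-4o") -> dict:
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    if not any("json" in m["content"].lower() for m in messages):
        raise ValueError("JSON mode requires the prompt to mention JSON")
    return {
        "model": model,
        "messages": messages,
        "response_format": {"type": "json_object"},
    }

payload = json_mode_payload(
    "Reply with a JSON object containing 'label' and 'confidence'.",
    "Classify: server unreachable",
)
print(payload["response_format"])
```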
JSON Schema (API Enforced)
You provide a detailed JSON Schema, and the API guarantees the model's response will match it exactly - correct fields, correct types, no extras, no omissions.
- You provide a formal JSON Schema definition
- The API enforces 100% compliance
- No prompt engineering needed - the schema does the work
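The API-enforced variant attaches a formal schema instead. The sketch below builds a `response_format` of type `json_schema` for the email-classification labels used elsewhere in this document; `strict: true` together with `additionalProperties: false` is what allows the service to guarantee exact compliance.

```python
# Sketch of an API-enforced structured output: a formal JSON Schema is
# attached as the response_format. The schema content mirrors this
# document's classification labels and is illustrative.

classification_schema = {
    "type": "object",
    "properties": {
        "label": {
            "type": "string",
            "enum": ["purchase", "infrastructure", "undefined"],
        },
        "reason": {"type": "string"},
    },
    "required": ["label", "reason"],
    "additionalProperties": False,  # required for strict mode
}

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "email_classification",
        "strict": True,  # the API enforces 100% schema compliance
        "schema": classification_schema,
    },
}
print(response_format["json_schema"]["name"])
```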
Limitations
- Only supported on newer Azure OpenAI models
Additional Information:
Model Compatibility
Microsoft does not provide a comprehensive model compatibility matrix, specifically regarding the capability of JSON-structured responses.
It is suggested that you test your desired configuration, covering some edge cases, observe any errors, and make sure that the output meets your needs.
Environment Variables
Environment Variables can be set from the Environment Variables task definition table. The following environment variables can affect the behavior of the extension.
| Environment Variable Name | Description | Version Information |
|---|---|---|
| UE_HTTP_TIMEOUT | Specifies the timeout (in seconds) for HTTP requests made by the Task Instance. A higher value allows for slower responses, while a lower value enforces stricter time constraints. If not set, a default of 60 seconds is used. When a timeout happens, the Task Instance ends in failure. | Introduced in 1.0.0 |
Exit Codes
| Exit Code | Status | Status Description | Meaning |
|---|---|---|---|
| 0 | Success | “Task executed successfully.“ | Successful Execution |
| 1 | Failure | “Execution Failed: <<Error Description>>” | Generic Error. Raised when not falling into the other Error Codes. |
| 20 | Failure | “Data Validation Error: <<Error Description>>“ | Input fields validation error. |
STDOUT and STDERR
STDOUT is used for displaying Job information. It is controlled by the STDOUT Output field. STDERR provides additional information to the user. The level of detail is tuned by the Log Level Task Definition field.
Backward compatibility is not guaranteed for the content of STDOUT/STDERR; it can change in future versions without notice.
Data Retention and Privacy
Overview
The Azure OpenAI Integration provides configurable data retention controls through the Allow Conversation Data Retention parameter. This section explains how data retention works, what the integration controls, and what additional configuration may be required for your organization's compliance needs.
What the Integration Controls
When the Allow Conversation Data Retention parameter is disabled (default behavior), the integration sets the store parameter to false in all API calls to Azure OpenAI Service and Azure AI Foundry. This disables the following Azure-managed storage features:
- Stored Completions: Input-output pairs are not stored for evaluation or fine-tuning purposes
- Responses API State: Response data is not retained beyond the immediate request/response cycle
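At the API level, disabling the parameter amounts to sending `store=false` with every request. The sketch below shows that effect on a dict-shaped request payload (an assumption for illustration; not the integration's actual code). Azure's abuse-monitoring retention is separate and unaffected.

```python
# Sketch: what the Allow Conversation Data Retention checkbox translates to
# in each API call. When retention is not allowed, 'store' is False, so
# Stored Completions / Responses-API state never persist the exchange.

def apply_retention_setting(payload: dict, allow_retention: bool) -> dict:
    """Return a copy of the request payload with the 'store' flag applied."""
    payload = dict(payload)  # avoid mutating the caller's payload
    payload["store"] = bool(allow_retention)
    return payload

request = apply_retention_setting(
    {"model": "gpt-4o", "input": "hello"}, allow_retention=False
)
print(request["store"])
```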
What the Integration does not control
Stonebranch Universal Automation Center and this integration do not control Azure's abuse monitoring and content filtering systems. By default, Microsoft Azure retains prompts and responses for up to 30 days for abuse monitoring purposes, regardless of the store parameter setting. This retention is managed entirely by Microsoft Azure and is outside the scope of this integration.
Achieving Zero Data Retention
For true zero data retention, your organization must:
- Use this integration with the Allow Conversation Data Retention field disabled (sets store=false)
- Obtain Microsoft approval for Modified Abuse Monitoring on your Azure deployment
Modified Abuse Monitoring requires:
- Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA)
- Explicit approval from Microsoft and Request submission via Microsoft's Abuse Monitoring Form
Only when both conditions are met (Allow Conversation Data Retention field disabled + Modified Abuse Monitoring) will your deployment achieve zero data retention.
Important Notes
- Integration Scope: This integration provides controls for Azure's feature-based data storage through the `store` parameter. Microsoft Azure's data retention policies and practices are managed by Microsoft and governed by your Azure service agreement.
- Customer Responsibilities: Customers should review Microsoft's data retention documentation to ensure their Azure configuration meets their compliance requirements. For questions about Azure's data handling or to request Modified Abuse Monitoring, contact Microsoft Azure support.
Official Microsoft Documentation
- Azure OpenAI Data Privacy
- Azure OpenAI Responses API
- Zero Data Retention Requirements
- Data Retention Q&A (2025)
- Chat Completions and Stored Completions
How To
Import Universal Template
To use the Universal Template, you first must perform the following steps.

Database Configuration (MySQL):
- max_allowed_packet: Defines the maximum size of a single network packet that can be read or written by the server and clients.
  - Recommended setting: Ensure it is configured to a value at least 25% greater than the size of your largest imported integration.
- innodb_log_file_size: Determines the size of the InnoDB redo log files. While this parameter has been simplified in recent MySQL versions, it is still relevant for MySQL versions prior to 8.0.30.
  - Purpose: Prevents "BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size" errors that might appear in Universal Controller logs.
  - Recommended setting: Ensure 10% of the log file size exceeds the size of your largest imported integration.

Universal Agent Configuration (config/omss.conf):
- max_msg_size: Specifies the maximum allowable size for messages. Messages exceeding the limit will not be accepted by the server.
  - Recommended setting: Ensure the configured value is 10% greater than the size of your largest imported integration.
Universal Controller Configuration
Universal Controller configuration file (tomcat/conf/uc.properties):
- uc.universal_template.extension.maximum_bytes (Universal Controller UI Admin Panel: Administration → Properties → Universal Template Extension Maximum Bytes)
  - Recommended setting: Ensure the configured value is greater than the size of your largest imported integration.
The deployment of an integration from Universal Controller to Universal Agents can also be restricted by the JVM heap size of the Tomcat serving the Universal Controller. Ensure the JVM heap size is configured adequately. Things to consider:
- # of agents configured to accept the extension(s).
- # of agents configured to deploy on-registration and how many would be registering simultaneously.
- # of parallel, on-demand deployments, where deployment happens the first time the extension needs to execute on a specific agent.
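The sizing rules above can be collected into a small calculator. The margins (25%, 10x, 10%) come from this document's recommendations; the setting names are paraphrased labels, not literal configuration keys.

```python
# Sketch of the sizing recommendations above, given the size in bytes of
# the largest integration to be imported. Margins are taken from this
# document; treat the results as starting points, not authoritative values.

def recommended_limits(largest_integration_bytes: int) -> dict:
    size = largest_integration_bytes
    return {
        # MySQL max_allowed_packet: at least 25% larger than the integration
        "mysql_max_allowed_packet": int(size * 1.25),
        # innodb_log_file_size: 10% of it must exceed the integration size
        "mysql_innodb_log_file_size": size * 10,
        # Agent omss.conf max_msg_size: 10% larger than the integration
        "agent_max_msg_size": int(size * 1.10),
        # Controller extension maximum bytes: simply greater than the size
        "uc_extension_maximum_bytes": size + 1,
    }

print(recommended_limits(100_000_000))
```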
This Universal Task requires the Resolvable Credentials feature. Check that the Resolvable Credentials Permitted system property has been set to true.
Import the Universal Template into your Controller:
Extract the zip file, you downloaded from the Integration Hub.
In the Controller UI, select Services > Import Integration Template option.
Browse to the "export" folder under the extracted files, select the ZIP file (the file name will be unv_tmplt_*.zip), and click Import.
When the file is imported successfully, refresh the Universal Templates list; the Universal Template will appear on the list.
Modifications of this integration, applied by users or customers, before or after import, might affect the supportability of this integration. For more information refer to Integration Modifications.
Configure Universal Task
For a new Universal Task, create a new task, and enter the required input fields.
Integration Modifications
Modifications applied by users or customers, before or after import, might affect the supportability of this integration. The following modifications are discouraged to retain the support level as applied for this integration.
Python code modifications should not be done.
Template Modifications
General Section
"Name", "Extension", "Variable Prefix", and "Icon" should not be changed.
Universal Template Details Section
"Template Type", "Agent Type", "Send Extension Variables", and "Always Cancel on Force Finish" should not be changed.
Result Processing Defaults Section
Success and Failure Exit codes should not be changed.
Success and Failure Output processing should not be changed.
Fields Restriction Section
The setup of the template does not impose any restrictions. However, concerning the "Exit Code Processing Fields" section, the Success/Failure exit codes need to be respected.
In principle, as STDERR and STDOUT outputs can change in follow-up releases of this integration, they should not be considered as a reliable source for determining the success or failure of a task.
Users and customers are encouraged to report defects, or feature requests at Stonebranch Support Desk.
Document References
This document references the following documents:
| Document Link | Description |
|---|---|
| Universal Templates | User documentation for creating, working with and understanding Universal Templates and Integrations. |
| Universal Tasks | User documentation for creating Universal Tasks in the Universal Controller user interface. |
| Azure OpenAI Responses API | Official documentation for Responses API including supported models, features, and usage examples. |
| Azure OpenAI Chat Completions | Official documentation for Chat Completions API including supported models, features, and usage examples. |
| Microsoft Azure AI Foundry | Information related to the Microsoft Azure AI Foundry platform. |
| Azure Foundry Model Catalog | Official documentation depicting all available models that can be used by using the Microsoft Foundry platform. |
Changelog
ue-azure-openai-1.1.0 (2026-01-22)
Enhancements
- Added: Improved Python compatibility with future Agent versions.
- Added: Importable configuration examples.
ue-azure-openai-1.0.0 (2025-12-19)
Initial Version