Azure OpenAI

Disclaimer

Your use of this download is governed by Stonebranch’s Terms of Use.

Version Information

Template Name

Extension Name

Version

Status

Azure OpenAI

ue-azure-openai

1 (Current 1.1.0)

Fixes and new features are introduced.

Refer to Changelog for version history information.

Overview

The Azure OpenAI Integration brings the power of enterprise-grade large language models into your UAC workflows, enabling intelligent automation that can understand, generate, and transform content. By connecting your automation workflows to Azure-hosted AI models, you can build solutions that combine the reliability and orchestration capabilities of UAC with the reasoning and language capabilities of modern LLMs.

This integration opens up a wide range of automation possibilities: from analyzing logs and generating human-readable summaries of complex system states, to intelligently routing and classifying data, transforming unstructured information into structured formats, or even making context-aware decisions within your workflows.

The integration works with both Azure OpenAI Service and Azure AI Foundry platforms, giving you flexibility in how you deploy and manage your AI models. Prompts can be enriched by files or UAC Variables and outputs can be persisted for reuse across multiple tasks and workflows, enabling you to chain AI operations together or share insights across your automation ecosystem.


Key Features

Feature

Description

Ask AI

  • Sends prompts to an LLM model exposed through the Azure OpenAI and Azure AI Foundry APIs.
  • Built on the OpenAI V1 API standard for long-term compatibility and seamless model upgrades.
  • Works with both the Chat Completions API and the newer Responses API.

Response Output

Stores the LLM response as a local file.

Prompt Enrichment

Enriches prompts with data from files and UAC Variables.

Configurable Data Retention Controls

Configurable data retention controls via the Allow Conversation Data Retention parameter, with support for zero data retention when used with Modified Abuse Monitoring. For more information, refer to Data Retention and Privacy.


Requirements

This integration requires a Universal Agent and a Python runtime to execute the Universal Task.

Area

Details

Python Version

Requires Python 3.11. Tested with the Agent-bundled Python distribution.

Universal Agent Compatibility

Compatible with Universal Agent for Windows x64, version >= 7.9.0.0.

Compatible with Universal Agent for Linux, version >= 7.9.0.0.

Universal Controller Compatibility

Universal Controller Version >= 7.9.0.0.

Network and Connectivity

Network connectivity to Azure OpenAI is required.


Supported Actions


Ask AI:

  • Ask AI (Chat Completions API) - Sends a system and a user prompt to models using the Chat Completions API. The Chat Completions API is the traditional approach and supports the widest range of models, including some older, cheaper models and non-OpenAI models compatible with the OpenAI API specification, such as Llama-3.3-70B-Instruct.
  • Ask AI (Responses API) - Sends a system and a user prompt to models using the Responses API. The Responses API is, in a way, the evolution of the Chat Completions API, but covers fewer models; some older OpenAI models and third-party non-OpenAI models are not supported.

In the current implementation, both APIs are used in stateless mode. As a result, they behave similarly and expose the same functionality.
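As a sketch of how the same system and user prompt map onto the two API styles in stateless mode, consider the request bodies below. This is illustrative only (not the extension's actual code); field names follow the OpenAI V1 API specification, and "gpt-4o" is just an example deployment name.

```python
# Illustrative sketch: the same stateless prompt expressed for both API styles.

def chat_completions_payload(system_prompt: str, user_prompt: str, model: str) -> dict:
    """Chat Completions API: the conversation is a list of role/content messages."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def responses_payload(system_prompt: str, user_prompt: str, model: str) -> dict:
    """Responses API: system text goes into 'instructions', the prompt into 'input'."""
    return {
        "model": model,
        "instructions": system_prompt,
        "input": user_prompt,
    }

cc = chat_completions_payload("You are a helpful assistant.", "What is 2 + 2?", "gpt-4o")
rp = responses_payload("You are a helpful assistant.", "What is 2 + 2?", "gpt-4o")
print(cc["messages"][0]["role"])  # system
print(rp["input"])                # What is 2 + 2?
```

Because no server-side conversation state is used, the full context travels with every request in both cases, which is why the two actions present the same capabilities.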


Configuration examples

Example: Ask AI (Chat Completions API) with Service Principal authentication and save the response to a Local Path

In this example, the user configures a Universal Task to invoke the Chat Completions API using Azure Service Principal authentication and the gpt-4o model. The task is configured to capture the AI-generated response in the EXTENSION output rather than printing it to STDOUT. Finally, the user enables the display of token usage metadata, providing visibility into the resources consumed during response generation.

Example: Ask AI (Responses API) with API Key authentication and fetch prompts from files

In this example, the user configures a Universal Task to invoke the Responses API using API Key authentication. Instead of defining the prompt inline, the task retrieves one or more prompts from files stored on disk. The user has also configured the task to display the entire conversation in both STDOUT and the EXTENSION output.

Example: Ask AI (Responses API) with API Key authentication and fetch system prompt from a UAC Variable

In this example, the user configures a Universal Task to invoke the Responses API using API Key authentication. The system prompt is dynamically retrieved from the ops_task_name UAC Variable. The task is configured to print the AI-generated response to STDOUT and the EXTENSION output.

Example: Ask AI (Chat Completions API) with API Key authentication and fetch the prompts from UAC Scripts

In this example, the user configures a Universal Task to invoke the Chat Completions API using API Key authentication. Both the system prompt and the user prompt (conversation thread) are sourced from UAC Scripts. The task is also configured to display the entire conversation in STDOUT as well as in the Extension Output.

Example: Ask AI (Chat Completions API) with API Key using Azure Foundry endpoint and a 3rd-party Model

In this example, the user configures a Universal Task to invoke the Chat Completions API using API Key authentication against an Azure Foundry endpoint. Instead of an OpenAI model, the task is configured to use the Llama-3.3-70B-Instruct third-party model. The task is configured to display the latest response in STDOUT as well as in the EXTENSION Output.

Action Output

Output Type

Description

Examples

EXTENSION

The extension output provides the following information:

  • exit_code, status_description: General info regarding the Task Instance execution. For more information, refer to the exit code table.

  • invocation.fields: The Task Instance configuration.

  • result.response: The AI-generated response. 

  • result.complete_conversation: Type: List of JSON objects with "role" and "content" attributes. The full conversation between the user and the AI model, split into the "system", "user", and "assistant" roles.
  • result.metadata: Type: JSON object. Information related to the prompt execution: generated tokens, finish_reason, etc.

  • result.errors: List of errors that might have occurred during execution.

Successful Execution
{
    "exit_code": 0,
    "status_description": "Task executed successfully",
    "invocation": {
        "extension": "ue-azure-openai",
        "version": "1.0.0",
        "fields": {
            "action": "Ask AI (Chat Completions API)",
            "azure_credentials": { ... },
            "authentication_method": "API Key",
            "azure_endpoint": "https://resource-name.openai.azure.com/openai/v1/",
            "model_dynamic": "",
            "model_manual": "gpt-4o",
            "system_prompt_source": "Text Field",
            "system_prompt_text": "You are a helpful assistant.",
            "system_prompt_script": null,
            "conversation_thread_source": "Text Field",
            "conversation_thread_text": "What is 2 + 2?",
            "conversation_thread_script": null,
            "stdout_output": ["Show Latest Response Only"],
            "extension_output": ["Show Latest Response Only", 
                "Show Entire Conversation", 
                "Token Usage and Metadata"
            ],
            "azure_tenant_id": "",
            "azure_subscription_id": "",
            "resource_group": "",
            "advanced_options": false,
            "files": null,
            "response_format_type": "Text",
            "response_format_json_schema": null,
            "save_options": "-- None --",
            "save_source": "",
            "save_local_path": "",
            "dry_run": false,
            "temperature": 1.0,
            "max_tokens": 1000,
            "top_p": 1.0,
            "frequency_penalty": 0.0,
            "presence_penalty": 0.0,
            "allow_conversation_retention": false
        }
    },
    "result": {
        "response": "2 + 2 = 4",
        "metadata": {
            "prompt_tokens": 25,
            "completion_tokens": 8,
            "total_tokens": 33,
            "finish_reason": "stop"
        },
        "complete_conversation": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is 2 + 2?"
            },
            {
                "role": "assistant",
                "content": "2 + 2 = 4"
            }
        ]
    }
}
Failed Execution
{
    "exit_code": 1,
    "status_description": "API Request Failed: Chat completion failed: Error code: 400 - 
        {'error': {'code': 'unknown_model','message': 'Unknown model: gpt-9', 
        'details': 'Unknown model: gpt-9'}}",
    "invocation": {
        "extension": "ue-azure-openai",
        "version": "1.0.0",
        "fields": {
            "save_options": "-- None --",
            "stdout_output": "Show Latest Response Only",
            "azure_endpoint": "https://resource-name.openai.azure.com/",
            "model_manual": "gpt-9",
            "presence_penalty": null,
            "azure_subscription_id": null,
            "model_dynamic": null,
            "response_format_type": "Text",
            "dry_run": false,
            "save_local_path": null,
            "extension_output": [
                "Show Latest Response Only",
                "Show Entire Conversation",
                "Token Usage and Metadata"
            ],
            "allow_conversation_retention": false,
            "resource_group": null,
            "max_tokens": 1000,
            "conversation_thread_text": "What is 2 + 2?",
            "save_source": null,
            "conversation_thread_source": "Text Field",
            "frequency_penalty": null,
            "action": "Ask AI (Chat Completions API)",
            "authentication_method": "API Key",
            "system_prompt_text": "You are a helpful assistant.",
            "top_p": 1.0,
            "azure_tenant_id": null,
            "advanced_options": false,
            "system_prompt_source": "Text Field",
            "temperature": 1.0,
            "azure_credentials": { ... }
        }
    },
    "result": {
        "errors": [ 
            "API Request Failed: Chat completion failed: Error code: 400 - 
            {'error': {'code': 'unknown_model', 'message': 'Unknown model: gpt-9', 
            'details': 'Unknown model: gpt-9'}}"
        ]
    }
}

STDOUT

When configured, shows the "Latest Response Only" or the "Entire Conversation" with the LLM in a human-readable way.


STDERR

Shows the logs from the Task Instance execution. The verbosity is controlled by the Task configuration Log Level.


Importable Configuration Examples

This integration provides importable configuration examples along with their dependencies, grouped as Use Cases to better describe end-to-end capabilities.

These examples help task authors become familiar with the configuration of tasks and related Use Cases. They should be imported into a test system and should not be used directly in production.

Initial Preparation Steps

  • STEP 1: Go to Stonebranch Integration Hub and download  "Azure OpenAI", "UAC Utility: Email Monitor", and "ServiceNow: Incident" integrations.
  • STEP 2: Locate and import the above integrations to the target Universal Controller. For more information refer to the How To section in this document.
  • STEP 3: Extract the Azure OpenAI archive. Inside the directory named "configuration_examples" you will find a list of definition zip files. Upload them one by one in the order given below, using the "Upload" functionality of the Universal Controller:
    • 1_variables.zip
    • 2_credentials.zip
    • 3_scripts.zip
    • 4_tasks.zip
    • 5_monitors.zip
    • 6_workflows.zip
    • 7_triggers.zip
  • STEP 4: Update the uploaded UAC Credential entities for Azure OpenAI, Email Monitor, and ServiceNow Incident.
  • STEP 5: Update the UAC global variables introduced with the 1_variables.zip file. Their names are prefixed with "ue_azure_openai". Review the descriptions of the variables, as they include information on how they should be populated.


  • The order indicated above ensures that the dependencies of the imported entities are uploaded first.
  • All imported entities are prefixed with UC1: Azure OpenAI.


How to "Upload" Definition Files to a Universal Controller

The "Upload" functionality of Universal Controller allows Users to import definitions exported with the "Download" functionality.


Login to Universal Controller and:

  • STEP 1: Click "Tasks" > "All Tasks"
  • STEP 2: Right-click the header of the column named "Name"
  • STEP 3: Click "Upload..."

In the pop-up "Upload..." dialogue:

  • STEP 1: Click "Choose File".
  • STEP 2: Select the appropriate zip definition file and click "Upload".
  • STEP 3: Observe the Console for possible errors.

Use Case 1: Email Classification, Output Transformation & Automated Incident Creation on ServiceNow

Description

This workflow uses Azure OpenAI to analyze incoming emails and send them to the right process. The Email Monitor integration watches a mailbox and creates a Universal Event for each new message, triggering a UAC Workflow. Azure OpenAI UAC Tasks then classify the email as a purchase request, an infrastructure IT issue, or an undefined case that needs human review.

If the email is an infrastructure IT request, another Azure OpenAI Task converts the email content into structured JSON, which the ServiceNow Incident integration uses to automatically create an Incident. This demonstrates end-to-end branching and data transformation in UAC powered by Azure OpenAI.​

The tasks configured demonstrate the following capabilities among others:

  • Capability to trigger a task or a workflow based on received Emails.
  • Propagation of useful information (like Email attributes) to downstream tasks.
  • Utilize Azure OpenAI Tasks to classify information and transform it into structured formats.
  • Handle business cases where the automatic creation of ServiceNow Incidents is necessary.

The components of the solution are described below:



  1. Represents the receipt of an e-mail to a Mailbox.
  2. "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)" - Subscribes to the Microsoft server, monitors emails, saves attachments and emails to the Agent's file system, and publishes a Universal Event for each monitored email.
  3. "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)" - Monitors the Universal Events published from "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)"
  4. "UC1: Azure OpenAI - Trigger Workflow" - Triggers the configured workflow "UC1: Azure OpenAI - Email Classification & Transformation" when the Monitor condition on (3) is true.
  5. "UC1: Azure OpenAI - Email Classification Task" - Calls gpt-4 to classify the contents of the received e-mail with one of the following values "purchase", "infrastructure", and "undefined".
  6. "UC1: Azure OpenAI - Email Transformation to Incident Task" - Transforms the contents of the received e-mail into a "ServiceNow Incident"-compatible JSON.
  7. "UC1: Azure OpenAI - Create a ServiceNow Incident Task" - Creates a ServiceNow Incident using the JSON produced in the previous step.
  8. "UC1: Azure OpenAI - Forward Purchase Request To Finance" - Task representing actions related to forwarding a purchase request to a Finance department.
  9. "UC1: Azure OpenAI - Represents Manual Actions" - Task representing actions related to introducing human intervention for cases that could not be classified by the "Azure OpenAI: Email Classification Task".

How to Run

Execution Steps:

  • STEP 1: Enable Trigger "UC1: Azure OpenAI - Trigger Workflow". This automatically starts "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)" and "UC1: Azure OpenAI - Monitor Email from Microsoft Server (Event Producer)"
  • STEP 2: Send an e-mail to the address monitored by "UC1: Azure OpenAI - Universal Monitor: Wait for Email (Event Consumer)".
  • STEP 3: Review the outputs of (7,8,9) depending on the classification produced by (5).

Expected Results:

  • Mail is received, successfully monitored, and a workflow is launched.
  • Task "UC1: Azure OpenAI - Email Classification Task" successfully produces a JSON-structured classification with one of the following values: "purchase", "infrastructure", "undefined".
  • If the classification label is "infrastructure", Task "UC1: Azure OpenAI - Email Transformation to Incident Task" launches successfully.

Ready-to-use e-mail messages:

Infrastructure-related Incident Query
Dear Support Team,

I hope you are doing well.

I am writing to kindly ask for your assistance, as I am currently experiencing an issue accessing the VPN. 
Unfortunately, I am unable to connect as expected, and this is preventing me from accessing the systems I normally use for my work.
So far, I have tried the following steps:

- Restarting my computer
- Disconnecting and reconnecting the VPN
- Checking my internet connection

However, the issue still persists.
I would greatly appreciate it if you could please advise me on what steps I should take next, or let me know if you require any additional information from my side.

Thank you very much for your time and support.

Kind regards,
John Doe
Purchase Request
Dear Support Team,

I hope you are doing well.

I am writing to kindly request your assistance regarding the purchase of a laptop for a new team member who will be joining us shortly.
Please let me know if there is any information you require from my side.

Kind regards,
John Doe
Uncategorized Query Requiring Human Intervention
Hello there,

Big news from CompanyName Cloud Services 👋

If you’re building products that rely on fast, reliable customer communication, we’ve got something you don’t want to miss.

Meet ConnectFlow™

Our all-in-one messaging and automation platform helps teams send SMS, email, and notifications at scale — without worrying about infrastructure, retries, or delivery issues.

Why teams are switching to ConnectFlow™:

⚡ Send messages globally in seconds

🔐 Enterprise-grade security and compliance

🔁 Built-in retries, monitoring, and analytics

🧩 Easy-to-use APIs and no-code options

📈 Designed to grow with you, from MVP to millions of users

Thousands of companies already trust ConnectFlow™ to power password resets, alerts, onboarding messages, and critical notifications — and now it’s your turn.

👉 Get started in minutes
👉 Free trial included
👉 No credit card required

Have questions? Our solutions team is ready to help you design the perfect setup for your use case.

Thanks for building with us,
The CompanyName Team

You’re receiving this email because you signed up for updates from CompanyName.
Prefer fewer emails? You can update your preferences or unsubscribe at any time.

© 2026 CompanyName, Inc. All rights reserved.

Input Fields

Name (Type) - Description - Version Information

Action (Choice)

The action performed upon the task execution.

  • Ask AI (Chat Completions API)

  • Ask AI (Responses API)

Introduced in 1.0.0

Azure Open AI / AI Foundry Endpoint (Text)

The project endpoint provided by the Azure platform. The Foundry endpoint enables access to a wider range of models.

Introduced in 1.0.0
Authentication Method (Choice)

The authentication method used to connect to the Azure OpenAI / Foundry platform.

Authentication methods currently supported:

  • Service Principal
  • API Key
Introduced in 1.0.0
Azure Credentials (Credentials)

The Azure Credentials to access the Azure OpenAI / Foundry platform.

When Authentication Method = 'Service Principal', the credential should be defined as follows:

  • Application (client) ID: as the Runtime User credential attribute.
  • Application (client) Secret: as the Runtime Password credential attribute.

When Authentication Method = 'API Key' the API Key needs to be placed in the Token field.

Introduced in 1.0.0
Azure Tenant ID (Text)

The Azure AD Tenant ID.

This field is visible when "Service Principal" Authentication Method is selected.

Introduced in 1.0.0
Azure Subscription (Dynamic Choice)

The Azure Subscription ID.

This field is visible when "Service Principal" Authentication Method is selected.

Introduced in 1.0.0
Resource Group (Dynamic Choice)

The Azure Resource group related to the Azure OpenAI Resource.

This field is visible when "Service Principal" Authentication Method is selected.

Introduced in 1.0.0
Model (Text / Dynamic Choice)

The AI Model (deployment) to be used.

It is converted to a Dynamic Choice Field when "Service Principal" Authentication Method is selected.

Introduced in 1.0.0
System Prompt Source (Choice)

System Prompt sources available:

  • Text Field
  • Prompt Library (UAC Script)
Introduced in 1.0.0
System Prompt (Text / Script)

The system prompt to be sent to the AI model. 

This field supports Dynamic Placeholders. Users can type the following template functions and place them anywhere within the System Prompt or the Conversation Thread fields, constructing the prompt in a dynamic way: 

  • {{ fetch_file('/path/to/file') }} - Loads the content of a file into the prompt.
  • {{ fetch_uac_variable('uac_variable_name') }} - Loads the content of a UAC Variable into the prompt.

It is a Script input field when "Prompt Library (UAC Script)" is selected as the System Prompt Source; otherwise, it is a Text field.

Dynamic Placeholders work with both Text and Script field types.

Introduced in 1.0.0
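The Dynamic Placeholder expansion described above can be sketched as follows. This is a hypothetical re-implementation for illustration only: the real extension resolves fetch_uac_variable() through the Controller, which is mocked here with a plain dict, and a temporary file stands in for a real log.

```python
import os
import re
import tempfile

# Hypothetical re-implementation of Dynamic Placeholder expansion (illustration
# only). A plain dict stands in for the Controller's UAC Variable lookup.
UAC_VARS = {"ops_task_name": "Email Classification Task"}

def render_prompt(template: str) -> str:
    """Expand {{ fetch_file('...') }} and {{ fetch_uac_variable('...') }} placeholders."""
    def read_file(match: re.Match) -> str:
        with open(match.group(1), encoding="utf-8") as fh:
            return fh.read()

    out = re.sub(r"\{\{\s*fetch_file\('([^']+)'\)\s*\}\}", read_file, template)
    out = re.sub(
        r"\{\{\s*fetch_uac_variable\('([^']+)'\)\s*\}\}",
        lambda m: UAC_VARS[m.group(1)],
        out,
    )
    return out

# Demo: a temporary file stands in for a log referenced by the prompt.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as fh:
    fh.write("ERROR: VPN tunnel down")
    log_path = fh.name

prompt = render_prompt(
    "Task {{ fetch_uac_variable('ops_task_name') }}: summarize this log:\n"
    "{{ fetch_file('" + log_path + "') }}"
)
print(prompt)
os.unlink(log_path)
```

Because placeholders are resolved before the prompt is sent, the model only ever sees the expanded text.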
Conversation Thread Source (Choice)

Conversation Thread sources available:

  • Text Field
  • Prompt Library (UAC Script)
Introduced in 1.0.0
Conversation Thread (Text / Script)

The Conversation Thread (user prompt) to be sent to the AI model.

This field supports Dynamic Placeholders. Users can type the following template functions and place them anywhere within the System Prompt or the Conversation Thread fields, constructing the prompt in a dynamic way: 

  • {{ fetch_file('/path/to/file') }} - Loads the content of a file into the prompt.
  • {{ fetch_uac_variable('uac_variable_name') }} - Loads the content of a UAC Variable into the prompt.

It is a Script input field when "Prompt Library (UAC Script)" is selected as the Conversation Thread Source; otherwise, it is a Text field. Dynamic Placeholders work with both Text and Script field types.

Introduced in 1.0.0
Response Format (Choice)

Select the output format of the AI response.

Response Formats available:

  • Text
  • JSON (Prompt Described)
  • JSON Schema (API Enforced)

For more information, refer to Structured Outputs.

Introduced in 1.0.0
JSON Schema Response (Script)

Provide the JSON Schema to format the AI model's output.

This field is visible when "JSON Schema (API Enforced)" Response Format is selected.

For more information, refer to Structured Outputs.

Introduced in 1.0.0
Save Options (Choice)

Specify the content to be saved from the response.

The following Save Options are available:

  • Save Entire Conversation
  • Save Latest Response Only
Introduced in 1.0.0
Save To (Choice)

Specify the save location of the response.

The following options are available:

  • Local Path

This field is visible when either "Save Entire Conversation" or "Save Latest Response Only" Save Options is selected.

Introduced in 1.0.0
Local Path (Text)

Provide the file path to save the response. If the file does not exist, the extension will attempt to create it. If the file already exists, it will be overwritten.


This field is visible when either "Save Entire Conversation" or "Save Latest Response Only" Save Options is selected.

Introduced in 1.0.0
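The Local Path semantics above (create if missing, overwrite if present) can be sketched as below. Creating parent directories is an assumption made for this illustration, not documented extension behavior.

```python
import os
import tempfile

# Sketch of the documented Local Path semantics: create the file if missing,
# overwrite it if present. Creating parent directories is an assumption for
# this illustration, not documented extension behavior.
def save_response(path: str, text: str) -> None:
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(text)  # mode "w" truncates, so an existing file is overwritten

base = tempfile.mkdtemp()
target = os.path.join(base, "responses", "answer.txt")
save_response(target, "2 + 2 = 4")    # file (and directory) created
save_response(target, "overwritten")  # existing file replaced
print(open(target, encoding="utf-8").read())  # overwritten
```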
Dry Run (Checkbox)

Validate inputs without calling the Azure API. Useful for troubleshooting and testing purposes.

Introduced in 1.0.0
Advanced Options (Checkbox)

Enable advanced options.

Introduced in 1.0.0
Temperature (Float)

Control the Sampling Temperature of the response. 

This field is visible when Advanced Options is selected.

Introduced in 1.0.0
Top P (Float)

Control the Nucleus Sampling of the response. Not all models support this value, and setting it may produce an error. The accepted range also differs between models.

This field is visible when Advanced Options is selected.

Introduced in 1.0.0
Frequency Penalty (Float)

Control repetition in the response. Not all models support this value, and setting it may produce an error. The accepted range also differs between models.

This field is visible when Advanced Options is selected.

Introduced in 1.0.0
Presence Penalty (Float)

Encourage new topics in the response. Not all models support this value, and setting it may produce an error. The accepted range also differs between models.

This field is visible when Advanced Options is selected.

Introduced in 1.0.0
Max Tokens (Integer)

Specify the maximum number of tokens to be generated by the LLM. This field controls the length of the response. If the response attempts to exceed the limit, it is abruptly stopped.

This field is visible when Advanced Options is selected.

Introduced in 1.0.0
Allow Conversation Data Retention (Checkbox)

Controls whether Azure stores conversation data for Stored Completions and Responses API features. When disabled, sets store=false to prevent feature-based data retention. By default it is disabled. This field is visible when Advanced Options is selected.

Note: This parameter does not control Azure's abuse monitoring system. For complete zero data retention, see the Data Retention and Privacy section.

Introduced in 1.0.0
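The effect of this checkbox on the request body can be sketched as follows. This is illustrative, not the extension's actual code; the store field name follows the OpenAI V1 specification.

```python
# Illustrative sketch: when Allow Conversation Data Retention is disabled,
# store=false is sent so Azure does not retain the exchange for the Stored
# Completions / Responses API features. Not the extension's actual code.
def apply_retention(payload: dict, allow_conversation_retention: bool) -> dict:
    body = dict(payload)
    body["store"] = bool(allow_conversation_retention)
    return body

req = apply_retention({"model": "gpt-4o", "input": "What is 2 + 2?"}, False)
print(req["store"])  # False
```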
STDOUT Output (Choice)

Specify the information to be displayed in STDOUT.

The following options are available:

  • Show Latest Response Only
  • Show Entire Conversation
Introduced in 1.0.0
EXTENSION Output (Multiple Choice)

Specify the information to be displayed in the EXTENSION output.

The following options are available:

  • Show Latest Response Only
  • Show Entire Conversation
  • Token Usage and Metadata
Introduced in 1.0.0


Structured Outputs


The Azure OpenAI Integration offers three ways to control how the AI model formats its responses. This guide explains each option and helps you choose the right one for your needs.

Text

Plain text responses with no formatting constraints. The model responds naturally in plain text. You describe what you want in your prompt, and the model does its best to follow your instructions. Best used for natural language responses without structure.

However, an "N-shot inference" strategy in your prompt can produce good results for a structured response. This mode is helpful when working with older, cheaper models where JSON structure is not enforced by the API.


System Prompt Example
You are an assistant that replies in specific format as mentioned in the examples below
Conversation Thread Example
[User] 
Tokyo is experiencing a beautiful spring day with clear blue skies and gentle breezes flowing through the streets. The temperature has reached a comfortable 22 degrees, making it perfect for 
outdoor activities. 

[Assistant] 
{ "city": "Tokyo", "temperature": 22}
 

[User] 
New York is facing a harsh winter storm today with heavy snowfall blanketing the entire metropolitan area. Public transportation is experiencing major delays and cancellations. The temperature has plummeted to
a frigid 5 degrees below zero, making it one of the coldest days this season. 

[Assistant] 
{ "city": "New York", "temperature": -5}

[User] 
London is having a typical autumn day with overcast skies and light drizzle throughout the morning hours. The temperature is holding steady at 15 degrees, which is quite typical for this time of year. 

[Assistant] 
{ "city": "London", "temperature": 15}


[User] 
Sydney is experiencing an intense summer heatwave with scorching temperatures breaking records across the region.  Authorities have issued heat warnings advising people to stay hydrated and avoid prolonged sun exposure. The temperature
has soared to an extreme 35 degrees, making it one of the hottest days of the summer. 
Extension Output Example
{
  ...
  "result": {
    "response": {
      "city": "Sydney",
      "temperature": 35
    },
    "metadata": {
      "prompt_tokens": 276,
      "completion_tokens": 14,
      "total_tokens": 290,
      "finish_reason": "stop"
    }
  }
}


JSON (Prompt Described)

The model returns JSON, but you describe the structure in your prompt. The model guarantees valid JSON syntax, but you must describe the desired fields and structure in your prompt. The model will try to follow your description; however, there is no strict enforcement. This strategy shows very good and consistent results for the majority of cases.

How it works:

  • The API ensures the response is valid JSON (proper brackets, quotes, commas)
  • You describe the desired structure in your prompt
  • The model attempts to match your description, but may not strictly adhere to it.

Requirements:

  • Your prompt must mention JSON - The API requires this. Without it, you'll get an error.
  • You should describe the desired structure in your system prompt
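The requirements above can be sketched as a Chat Completions request body. This is illustrative, using OpenAI V1 field names; "gpt-4o" is an example deployment name. Note that the system prompt mentions JSON, as the API requires in this mode.

```python
import json

# Sketch of a request body for JSON (Prompt Described) mode.
# response_format {"type": "json_object"} makes the API guarantee valid JSON
# syntax; the desired shape is only described in the system prompt.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a helpful assistant providing information strictly in JSON format.\n"
                'The JSON format should be: { "city": "<city_name>", "temperature": <number>}'
            ),
        },
        {"role": "user", "content": "London is holding steady at 15 degrees today."},
    ],
    "response_format": {"type": "json_object"},
}
print(json.dumps(payload["response_format"]))  # {"type": "json_object"}
```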


System Prompt Example
You are a helpful assistant providing information strictly in JSON format.

The JSON format should be:

{ "City": "<city_name>", "temperature": <number>}
Conversational Thread Example
[User]

Input text is : 
"New York is facing a harsh winter storm today with heavy snowfall blanketing the entire metropolitan area. The streets are covered in thick layers of snow, and visibility is significantly reduced due to the ongoing blizzard 
conditions. Strong winds are creating dangerous wind chills, and residents are advised to stay indoors unless absolutely necessary. Public transportation is experiencing major delays and cancellations. The temperature has plummeted to
a frigid 5 degrees below zero, making it one of the coldest days this season."
Extension Output Example
{
  ...
  "result": {
    "response": {
      "city": "New York",
      "temperature": -5
    },
    "metadata": {
      "prompt_tokens": 276,
      "completion_tokens": 14,
      "total_tokens": 290,
      "finish_reason": "stop"
    }
  }
}


JSON Schema (API Enforced)

You provide a detailed JSON Schema, and the API guarantees the model's response will match it exactly - correct fields, correct types, no extras, no omissions.


How it works:
  • You provide a formal JSON Schema definition
  • The API enforces 100% compliance
  • No prompt engineering needed - the schema does the work

Limitations

  • Only supported on newer Azure OpenAI models


System Prompt Example
You are a helpful assistant replying with a predefined JSON structure
Conversational Thread Example
[User]

New York is facing a harsh winter storm today with heavy snowfall blanketing the entire metropolitan area. The streets are covered in thick layers of snow, and visibility is significantly reduced due to the ongoing blizzard
conditions. Strong winds are creating dangerous wind chills, and residents are advised to stay indoors unless absolutely necessary. Public transportation is experiencing major delays and cancellations. The temperature has plummeted to
a frigid 5 degrees below zero, making it one of the coldest days this season.
JSON Schema Response Script Example
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "City Name"
    }, 
    "temperature": {
      "type": "number",
      "description": "Temperature of the City, Number"
    } 
  },
  "required": ["city", "temperature"],
  "additionalProperties": false
}
Extension Output Example 
{
 ...
 "result": {
   "response": {
      "city": "New York",
      "temperature": -5
   },
   "metadata": {
     "prompt_tokens": 276,
     "completion_tokens": 14,
     "total_tokens": 290,
     "finish_reason": "stop"
   }
 }
}
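Although the API already enforces the schema, a downstream task may still want to re-check the structured response before acting on it. The following standard-library sketch mirrors the schema's constraints (required fields, types, no extra properties); it is illustrative only and not part of the extension.

```python
# Minimal re-validation of the structured response against the schema's
# constraints (required fields, correct types, no extra properties),
# using only the standard library. Illustrative sketch.
def validate_weather(payload: dict) -> bool:
    expected_types = {"city": str, "temperature": (int, float)}
    if set(payload) - set(expected_types):        # additionalProperties: false
        return False
    for field, typ in expected_types.items():     # required + type checks
        if field not in payload or not isinstance(payload[field], typ):
            return False
    return True

print(validate_weather({"city": "New York", "temperature": -5}))  # True
print(validate_weather({"city": "New York"}))                     # False (missing field)
```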


Model Compatibility


Microsoft does not provide a comprehensive model compatibility matrix, specifically regarding support for structured JSON responses.

It is suggested that you test your desired configuration against some edge cases, observe any errors, and make sure the output meets your needs.

Environment Variables

Environment Variables can be set from the Environment Variables task definition table. The following environment variables can affect the behavior of the extension.

Environment Variable Name | Description | Version Information
UE_HTTP_TIMEOUT | Specifies the timeout (in seconds) for HTTP requests made by the Task Instance. A higher value allows for slower responses, while a lower value enforces stricter time constraints. If not set, a default of 60 seconds is used. When a timeout occurs, the Task Instance ends in failure. | Introduced in 1.0.0
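The extension's internal handling of this variable is not published; as an assumption-laden sketch, a timeout driven by UE_HTTP_TIMEOUT could be resolved like this (the function name is hypothetical):

```python
import os

DEFAULT_TIMEOUT = 60  # seconds, per the documented default


def resolve_http_timeout() -> float:
    # Hypothetical helper: reads UE_HTTP_TIMEOUT from the task's
    # environment and falls back to the 60-second default.
    raw = os.environ.get("UE_HTTP_TIMEOUT")
    if raw is None:
        return DEFAULT_TIMEOUT
    return float(raw)


os.environ["UE_HTTP_TIMEOUT"] = "120"
print(resolve_http_timeout())  # 120.0
```

Whatever value is resolved would then be passed as the per-request timeout of the underlying HTTP client; when the timeout elapses, the Task Instance ends in failure.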

Exit Codes

Exit Code | Status | Status Description | Meaning
0 | Success | "Task executed successfully." | Successful Execution
1 | Failure | "Execution Failed: <<Error Description>>" | Generic Error. Raised when not falling into the other Error Codes.
20 | Failure | "Data Validation Error: <<Error Description>>" | Input fields validation error.


STDOUT and STDERR


STDOUT is used for displaying Job information and is controlled by the STDOUT Options field. STDERR provides additional information to the user; the level of detail is tuned by the Log Level Task Definition field.

Backward compatibility is not guaranteed for the content of STDOUT/STDERR, which can change in future versions without notice.


Data Retention and Privacy

Overview

The Azure OpenAI Integration provides configurable data retention controls through the Allow Conversation Data Retention parameter. This section explains how data retention works, what the integration controls, and what additional configuration may be required for your organization's compliance needs.

What the Integration Controls

When the Allow Conversation Data Retention parameter is disabled (default behavior), the integration sets the store parameter to false in all API calls to Azure OpenAI Service and Azure AI Foundry. This disables the following Azure-managed storage features:

  • Stored Completions: Input-output pairs are not stored for evaluation or fine-tuning purposes
  • Responses API State: Response data is not retained beyond the immediate request/response cycle
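In terms of the underlying API, this maps onto the `store` parameter sent with each request. The sketch below shows the assumed payload fragment; field names follow the public OpenAI API, and the deployment name is a placeholder.

```python
# Sketch: how the "Allow Conversation Data Retention" task field maps
# onto the API payload. When the field is disabled (the default), the
# extension sends store=false, opting out of Stored Completions and
# Responses API state retention on the Azure side.
allow_retention = False  # value of the UAC task field (default: disabled)

payload = {
    "model": "my-gpt-deployment",  # placeholder deployment name
    "messages": [{"role": "user", "content": "Summarize today's job log."}],
    "store": allow_retention,      # False -> Azure does not store the exchange
}

print(payload["store"])  # False
```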

What the Integration does not control

Stonebranch Universal Automation Center and this integration do not control Azure's abuse monitoring and content filtering systems. By default, Microsoft Azure retains prompts and responses for up to 30 days for abuse monitoring purposes, regardless of the store parameter setting. This retention is managed entirely by Microsoft Azure and is outside the scope of this integration.

Achieving Zero Data Retention

For true zero data retention, your organization must:

  • Use this integration with Allow Conversation Data Retention field disabled (sets store=false)
  • Obtain Microsoft approval for Modified Abuse Monitoring on your Azure deployment

Only when both conditions are met (Allow Conversation Data Retention field disabled + Modified Abuse Monitoring approved) will your deployment achieve zero data retention.

Important Notes

  • Integration Scope: This integration provides controls for Azure's feature-based data storage through the `store` parameter. Microsoft Azure's data retention policies and practices are managed by Microsoft and governed by your Azure service agreement.
  • Customer Responsibilities: Customers should review Microsoft's data retention documentation to ensure their Azure configuration meets their compliance requirements. For questions about Azure's data handling or to request Modified Abuse Monitoring, contact Microsoft Azure support.

Official Microsoft Documentation 

How To

Import Universal Template

To use the Universal Template, you must first perform the following steps.

For large Universal Templates like this one, some parameters need to be adjusted, particularly for MySQL, Universal Controller, and Universal Agent:
MySQL (my.cnf)
max_allowed_packet: Defines the maximum size of a single network packet that can be read or written by the server and clients.
  • Recommended setting: Ensure it is configured to a value at least 25% greater than the size of your largest imported integration
innodb_log_file_size: Determines the size of InnoDB redo log files. While this parameter has been simplified in recent MySQL versions, it's still relevant for MySQL versions prior to 8.0.30
  • Purpose: Prevents "BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size" errors that might appear on Universal Controller logs
  • Recommended setting: Ensure that 10% of the log file size exceeds the size of your largest imported integration

Universal Agent Configuration (config/omss.conf):
max_msg_size: Specifies the maximum allowable size for messages. Messages exceeding the limit will not be accepted by the server.
  • Recommended setting: Ensure the configured value is 10% greater than your largest imported integration

Universal Controller Configuration
Universal Controller configuration can restrict import of an integration if the extension maximum bytes is not properly set. This can be configured in the following ways:
  1. Universal Controller configuration file (tomcat/conf/uc.properties): uc.universal_template.extension.maximum_bytes

  2. Universal Controller UI Admin Panel: Administration → Properties → Universal Template Extension Maximum Bytes
  • Recommended setting: Ensure the configured value is greater than the size of your largest imported integration
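Pulling the recommendations above together, the relevant settings for an integration package of roughly 1 MB might look like the following. All values are illustrative, not prescriptive; size your own values against your largest imported integration.

```ini
# MySQL (my.cnf) -- illustrative values for a ~1 MB integration package
[mysqld]
max_allowed_packet = 16M       # at least 25% above the largest imported integration
innodb_log_file_size = 256M    # relevant for MySQL versions prior to 8.0.30

# Universal Agent (config/omss.conf) -- illustrative
# max_msg_size 2000000         # ~10% above the largest imported integration

# Universal Controller (tomcat/conf/uc.properties) -- illustrative
# uc.universal_template.extension.maximum_bytes=2000000
```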

The deployment of an integration from Universal Controller to Universal Agents can also be restricted by the JVM heap size of the Tomcat instance serving the Universal Controller. Ensure the JVM heap size is configured adequately. Things to consider:
  • # of agents configured to accept the extension(s).
  • # of agents configured to deploy on-registration and how many would be registering simultaneously.
  • # of parallel, on-demand deployments, where deployment happens the first time the extension needs to execute on a specific agent.
  1. This Universal Task requires the Resolvable Credentials feature. Check that the Resolvable Credentials Permitted system property has been set to true.

  2. Import the Universal Template into your Controller:

    1. Extract the zip file you downloaded from the Integration Hub.

    2. In the Controller UI, select Services > Import Integration Template option.

    3. Browse to the “export” folder under the extracted files, select the ZIP file (the file name will be unv_tmplt_*.zip), and click Import.

    4. When the file is imported successfully, refresh the Universal Templates list; the Universal Template will appear on the list.

Modifications of this integration, applied by users or customers, before or after import, might affect the supportability of this integration. For more information refer to Integration Modifications.

Configure Universal Task

For a new Universal Task, create a new task, and enter the required input fields.

Integration Modifications

Modifications applied by users or customers, before or after import, might affect the supportability of this integration. The following modifications are discouraged to retain the support level as applied for this integration.

  • Python code modifications should not be done.

  • Template Modifications

    • General Section

      • "Name", "Extension", "Variable Prefix", and "Icon" should not be changed.

    • Universal Template Details Section

      • "Template Type", "Agent Type", "Send Extension Variables", and "Always Cancel on Force Finish" should not be changed.

    • Result Processing Defaults Section

      • Success and Failure Exit codes should not be changed.

      • Success and Failure Output processing should not be changed.

    • Fields Restriction Section
      The setup of the template does not impose any restrictions. However, concerning the "Exit Code Processing Fields" section:

      1. Success/Failure exit codes need to be respected.

      2. In principle, as STDERR and STDOUT outputs can change in follow-up releases of this integration, they should not be considered a reliable source for determining the success or failure of a task.

Users and customers are encouraged to report defects, or feature requests at Stonebranch Support Desk.


Document References

This document references the following documents:

Document Link | Description
Universal Templates | User documentation for creating, working with, and understanding Universal Templates and Integrations.
Universal Tasks | User documentation for creating Universal Tasks in the Universal Controller user interface.
Azure OpenAI Responses API | Official documentation for the Responses API, including supported models, features, and usage examples.
Azure OpenAI Chat Completions | Official documentation for the Chat Completions API, including supported models, features, and usage examples.
Microsoft Azure AI Foundry | Information related to the Microsoft Azure AI Foundry platform.
Azure Foundry Model Catalog | Official documentation listing all models available through the Microsoft Foundry platform.


Changelog

ue-azure-openai-1.1.0 (2026-01-22)

Enhancements

  • Added: Improved Python compatibility with future Agent versions.
  • Added: Importable configuration examples.

ue-azure-openai-1.0.0 (2025-12-19)

Initial Version