Apache Kafka: Event Monitor
Disclaimer
Your use of this download is governed by Stonebranch’s Terms of Use.
Version Information
| Template Name | Extension Name | Version | Status |
|---|---|---|---|
| Apache Kafka: Event Monitor | ue-kafka-monitor | 1 (current: 1.2.0) | Fixes and new features are introduced. |
Refer to the Changelog for version history information.
Overview
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. This Universal Extension is responsible for monitoring events (messages) from topics in Kafka, and consuming them based on filtering criteria via consumer group subscription.
Key Features
Typically, this extension can be used to monitor events from Kafka and, upon successful execution, trigger workflows or other tasks, or simply pass information related to the Kafka event within UAC.
| Feature | Description |
|---|---|
| Monitor for events | Monitors and consumes events (messages) from a Kafka topic via consumer group subscription, optionally matching events against filtering criteria on the event value. |
| Authentication | Supports the "PLAINTEXT", "SASL_SSL" (SCRAM or PLAIN mechanisms), and "SSL" security protocols, including client authentication over SSL. |
| Other | Supports "String", "Integer", "Float", and "JSON" key/value deserialization, with value matching through comparison operators and JSON path expressions. |
Requirements
This integration requires a Universal Agent and a Python runtime to execute the Universal Task.
| Area | Details |
|---|---|
| Python Version | Requires Python version 3.11. Tested with the Universal Agent bundled Python distribution (Python versions 3.11.6, 3.11.9, and 3.11.13). |
| Universal Agent Compatibility | Requires a Universal Agent with the bundled Python distribution available (see Python Version above). |
| Universal Controller Compatibility | Universal Controller Version >= 7.6.0.0. |
| Network and Connectivity | Only in cases of SSL verification: the Certificate Authority (CA) bundle file referenced by CA Bundle Path must be accessible from the Universal Agent host. |
| Apache Kafka Compatibility | This integration is tested on Kafka version 3.0. It is expected to work with versions 2.0.0 onwards; however, this has not been tested. |
Supported Actions
There is one Top-Level action controlled by the Action Field:
- Monitor for events
Action: Monitor for events
This action is responsible for monitoring events from topics in Kafka and consuming them based on filtering criteria via consumer group subscription.
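The sketch below illustrates the general pattern behind this action using the kafka-python library. It is a minimal sketch, not the extension's actual code; the topic name, group id, server address, and "Contains" filter are illustrative placeholders.

```python
# A minimal sketch of the monitoring pattern, not the extension's code.
# Assumes kafka-python; topic, group id, and filter are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "demo-topic",                                    # hypothetical topic
    bootstrap_servers="host1:9092",
    group_id="demo-consumer-group",                  # consumer group subscription
    enable_auto_commit=False,                        # commit only after a match
    value_deserializer=lambda b: b.decode("utf-8"),  # "String" value deserializer
)

matched = None
while matched is None:
    # Poll a small batch of records (compare UE_KAFKA_POLL_TIMEOUT_MS and
    # UE_KAFKA_MAX_READ_BUFFER_LENGTH in the Environment Variables section).
    batch = consumer.poll(timeout_ms=1000, max_records=10)
    for _tp, messages in batch.items():
        for msg in messages:
            if "Stonebranch" in msg.value:  # example "Contains" value filter
                matched = msg
                break
        if matched:
            break

consumer.commit()  # commit the consumer group offset
consumer.close()
print(matched.topic, matched.partition, matched.offset, matched.value)
```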
Action Output
| Output Type | Description | Examples |
|---|---|---|
| EXTENSION | In the context of a workflow, subsequent tasks can rely on the information provided by this integration as Extension Output. The extension output provides information about the task invocation and the result, including the matched Kafka event's details. The Extension Output for this Universal Extension is in JSON format. | |
| STDOUT | STDOUT provides additional information to the user. The populated content can be changed in future versions of this extension without notice. Backward compatibility is not guaranteed. | |
| STDERR | STDERR provides additional information to the user. The populated content can be changed in future versions of this extension without notice. Backward compatibility is not guaranteed. | |
Configuration Examples
Example: PLAINTEXT Security Protocol Task configuration
Example of Universal Task for "PLAINTEXT" Security Protocol:
Example: SASL_SSL Security Protocol Task Configuration
Example of Universal Task for "SASL_SSL" Security Protocol:
Example: SSL Security Protocol Task Configuration
Example of Universal Task for "SSL" Security Protocol. Specifically, Client authentication over SSL takes place with an encrypted private key:
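As a rough illustration of what these fields control, the sketch below shows how the security-related task fields map onto kafka-python KafkaConsumer parameters; all hostnames, file paths, and credentials are placeholders.

```python
# Illustrative mapping of the security-related task fields onto
# kafka-python KafkaConsumer parameters; all values are placeholders.
from kafka import KafkaConsumer

# "SASL_SSL" security protocol with SCRAM authentication
sasl_consumer = KafkaConsumer(
    bootstrap_servers="host1:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-256",           # SASL Mechanism
    sasl_plain_username="alice",              # SASL User Credentials (user)
    sasl_plain_password="secret",             # SASL User Credentials (password)
    ssl_cafile="/path/to/ca-bundle.pem",      # CA Bundle Path
    ssl_check_hostname=True,                  # SSL Hostname Check
)

# "SSL" security protocol with client authentication over an encrypted key
ssl_consumer = KafkaConsumer(
    bootstrap_servers="host1:9093",
    security_protocol="SSL",
    ssl_cafile="/path/to/ca-bundle.pem",      # CA Bundle Path
    ssl_certfile="/path/to/client-cert.pem",  # Client Certificate Path
    ssl_keyfile="/path/to/client-key.pem",    # Client Private Key Path
    ssl_password="key-passphrase",            # Client Private Key Password
    ssl_check_hostname=True,                  # SSL Hostname Check
)
```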
Example: Matching a Kafka Event with String Value Deserializer
In this example, the Task monitors for events with a value that contains the string "Stonebranch":
In the extension output result, the matched event's details are printed:
Example: Matching a Kafka Event with JSON Value Deserializer
In this example, the task monitors for events with a value in JSON format that has a list of "phoneNumbers", whose second element has a "type" attribute equal to "home":
In the extension output result, the matched event's details are printed (only the result JSON element is shown below):
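As an illustration of how such a JSON path filter can be evaluated, the sketch below uses the jsonpath-ng library (on which the Value JSON Path field is based) against a payload shaped like the example above; the payload and path are illustrative.

```python
# Sketch of JSON path matching with jsonpath-ng; payload and path are
# illustrative, mirroring the example above.
import json
from jsonpath_ng import parse

payload = json.loads('{"phoneNumbers": [{"type": "office"}, {"type": "home"}]}')
expr = parse("phoneNumbers[1].type")  # second element's "type" attribute

# A Kafka message matches if at least one resolved element equals the Value.
values = [m.value for m in expr.find(payload)]
print(values)            # ['home']
print("home" in values)  # True -> the Kafka event is matched
```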
Example: Matching a Kafka Event with Value Filter None
In this example, the task monitors for events and matches any Kafka event as the selected Value Filter is "None":
For more information about the extension output result, refer to the Action Output section.
Input Fields
The input fields for this Universal Extension are described below.
| Field | Input type | Default value | Type | Description |
|---|---|---|---|---|
| Action | Required | Monitor for events | Choice | The action performed upon the task execution. |
| Security Protocol | Required | PLAINTEXT | Choice | The security protocol used to communicate with Kafka brokers. Valid values are: "PLAINTEXT", "SASL_SSL", and "SSL". |
| Bootstrap Servers | Required | - | Text | 'host:port' string (or list of 'host:port' strings, separated by commas) that the consumer should contact to bootstrap initial cluster metadata. This does not have to be the full node list. It just needs to have at least one broker that will respond to a Metadata API Request (more than one can be used, though, in case a server is down). Example with two servers: 'host1:port1,host2:port2'. |
| SASL Mechanism | Optional | SCRAM-SHA-256 | Choice | The authentication mechanism when Security Protocol is configured for SASL_SSL. Valid values are: "SCRAM-SHA-256", "SCRAM-SHA-512", and "PLAIN". Required when Security Protocol is "SASL_SSL". |
| SASL User Credentials | Optional | - | Credentials | Credentials for SASL authentication, comprised of a username (Runtime User) and a password (Runtime Password). Required when Security Protocol is "SASL_SSL". |
| SSL Hostname Check | Optional | true | Boolean | Flag to configure whether the SSL handshake should verify that the certificate matches the broker's hostname. Required when Security Protocol is "SASL_SSL" or "SSL". |
| CA Bundle Path | Optional | - | Text | Path and file name of the Certificate Authority (CA) file to use in certificate verification. Used when it is required to locate the CA file if Security Protocol is configured for "SASL_SSL" or "SSL". |
| Client Certificate Path | Optional | - | Text | Filepath of the Client's Certificate for Client authentication over SSL in PEM format. Required when Security Protocol is "SSL". |
| Client Private Key Path | Optional | - | Text | Filepath of the Client's private key for Client authentication over SSL in PEM format. The private key can be either unencrypted or encrypted. In the case of an encrypted private key, the respective Client Private Key Password should be provided. Required when Security Protocol is "SSL". |
| Client Private Key Password | Optional | - | Credential | In case the client's private key in Client Private Key Path is encrypted, the key required for decryption. The Credentials definition should provide the passphrase as the Runtime Password. |
| Consumer Type | Required | Consumer Group | Choice | Type of consumer used to get messages from Kafka. Available option: "Consumer Group". |
| Consumer Group | Required | - | Text | The unique name of the consumer group to join for dynamic partition assignment and to use for fetching and committing offsets. |
| Topic | Required | - | Dynamic Choice | Dynamically fetched list of topics to subscribe the consumer to. The user can select the required topic from a drop-down list. |
| Client Id | Optional | - | Text | This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. The constructed client id seen on the server side is Client Id + "task instance id". If Client Id is not populated, the default value used is "ue-kafka-monitor-#". Example: "ue-kafka-monitor-#1635348654595881042JAYE7BKNPYVG3". |
| Start from | Required | Consumer Group Offset | Choice | Controls from which point the consumption will start. Available option: "Consumer Group Offset" (consumption starts from the consumer group's committed offset). |
| Key Deserializer | Required | String | Choice | Type of key deserialization. Available options are: "String", "Integer", "Float", and "JSON". |
| Value Deserializer | Required | String | Choice | Type of value deserialization. Available options are: "String", "Integer", "Float", and "JSON". |
| Value Filter | Optional | None | Choice | Value operator specifying the criteria used to match records and stop consuming messages. If Value Deserializer is set to "Integer" or "Float", the available options are numeric comparison operators. If Value Deserializer is set to "String", the available options are string operators (for example, matching values that contain a given string). If Value Deserializer is set to "JSON", the available options are all that apply to the "Integer", "Float", and "String" Value Deserializers. |
| Value | Optional | - | Text | The value to which the Value Filter applies. |
| Value JSON Path | Optional | - | Text | The JSON path used to locate the Value when Value Deserializer is set to "JSON". The JSON path needs to resolve either to a number or to a string. If the JSON path results in a list of numbers or a list of strings, a Kafka message is matched if at least one element from the list matches the Value. JSON Path syntax is based on the jsonpath-ng Python library. For examples, please refer to the official website. |
| Show Advanced Settings | Required | False | Boolean | By checking this field, more fields are available for advanced configuration. The advanced fields are: Partition Assignment Strategy, Session Timeout (ms), Auto Offset Reset, Request Timeout (ms), Heartbeat Interval (ms), and Max Partition Fetch Bytes (see the sketch after this table). |
| Partition Assignment Strategy | Optional | Range | Choice | Partition assignment policy used to distribute partition ownership amongst consumer instances when group management is used. Available options are: "Range" and "Round Robin". |
| Session Timeout (ms) | Optional | 10000 | Integer | Controls how long a consumer can go without sending a heartbeat before it is considered failed. If more than Session Timeout milliseconds pass without the consumer sending a heartbeat to the group coordinator, it is considered dead, and the group coordinator will trigger a rebalance of the consumer group to allocate partitions from the dead consumer to the other consumers in the group. |
| Auto Offset Reset | Optional | Latest | Choice | Controls the behavior of the consumer when it starts reading a partition for which it does not have a committed offset, or the committed offset is invalid (usually because the consumer was down for so long that the record with that offset was already aged out of the broker). Available options are: "Earliest" (read from the beginning of the partition) and "Latest" (read only records written after the consumer started). |
| Request Timeout (ms) | Optional | 305000 | Integer | Controls the maximum amount of time the client will wait for the response of a request. |
| Heartbeat Interval (ms) | Optional | 3000 | Integer | The expected time in milliseconds between heartbeats to the consumer coordinator when using Kafka's group management facilities. The value must be set lower than Session Timeout (ms). |
| Max Partition Fetch Bytes | Optional | 1048576 | Integer | Controls the maximum number of bytes the server will return per partition. This size must be at least as large as the maximum message size the server allows, or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. The default is 1 MB. |
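For orientation, the sketch below shows how the advanced settings (with their documented defaults) correspond to kafka-python KafkaConsumer parameters; the server address and group name are placeholders.

```python
# Illustrative mapping of the advanced settings (documented defaults shown)
# onto kafka-python KafkaConsumer parameters; server and group are placeholders.
from kafka import KafkaConsumer
from kafka.coordinator.assignors.range import RangePartitionAssignor

consumer = KafkaConsumer(
    bootstrap_servers="host1:9092",
    group_id="demo-consumer-group",
    partition_assignment_strategy=[RangePartitionAssignor],  # "Range"
    session_timeout_ms=10000,           # Session Timeout (ms)
    auto_offset_reset="latest",         # Auto Offset Reset
    request_timeout_ms=305000,          # Request Timeout (ms)
    heartbeat_interval_ms=3000,         # Heartbeat Interval (ms), < session timeout
    max_partition_fetch_bytes=1048576,  # Max Partition Fetch Bytes (1 MB)
)
```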
Environment Variables
Environment variables can be set from the Environment Variables table in the task definition. The following environment variables can affect the behavior of the extension; a small validation sketch follows the table.
| Environment Variable Name | Description | Version Information |
|---|---|---|
| UE_KAFKA_POLL_TIMEOUT_MS | Milliseconds spent waiting in poll if data is not available in the buffer. Must be an integer greater than 0. Default value is 1000. | Introduced in 1.2.0 |
| UE_KAFKA_MAX_READ_BUFFER_LENGTH | The maximum number of records to fill the buffer with from a single poll. Must be an integer greater than 0. Default value is 10. | Introduced in 1.2.0 |
| UE_KAFKA_API_VERSION_AUTO_TIMEOUT_MS | Milliseconds the Kafka client will wait to automatically detect the broker API version before raising an error. Default value is 2000. | Introduced in 1.2.0 |
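As an illustration of the validation rules in the table, the sketch below reads such variables with their defaults; the helper function is hypothetical and not part of the extension.

```python
# Hypothetical helper showing how these variables could be read and
# validated; names and defaults match the table above.
import os

def read_positive_int(name: str, default: int) -> int:
    """Read an integer environment variable that must be greater than 0."""
    value = int(os.environ.get(name, default))
    if value <= 0:
        raise ValueError(f"{name} must be an integer greater than 0")
    return value

poll_timeout_ms = read_positive_int("UE_KAFKA_POLL_TIMEOUT_MS", 1000)
max_read_buffer_length = read_positive_int("UE_KAFKA_MAX_READ_BUFFER_LENGTH", 10)
api_version_auto_timeout_ms = read_positive_int("UE_KAFKA_API_VERSION_AUTO_TIMEOUT_MS", 2000)
```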
Exit Codes
The exit codes for this Universal Extension are described below.
| Exit Code | Status Classification Code | Status Classification Description | Status Description |
|---|---|---|---|
| 0 | SUCCESS | Successful Execution | SUCCESS: Successful Task execution |
| 1 | FAIL | Failed Execution | FAIL: < Error Description > |
| 10 | CONNECTION_ERROR | Bad connection data | CONNECTION_ERROR: < Error Description > |
| 11 | CONNECTION_TIMEOUT | Connection timed out | CONNECTION_TIMEOUT: < Error Description > |
| 20 | DATA_VALIDATION_ERROR | Bad input fields validation | DATA_VALIDATION_ERROR: Some of the input fields cannot be validated. See STDERR for more details |
How To
Import Universal Template
To use the Universal Template, you must first perform the following steps.
This Universal Task requires the Resolvable Credentials feature. Check that the Resolvable Credentials Permitted system property has been set to true.
Import the Universal Template into your Controller:
1. Extract the ZIP file you downloaded from the Integration Hub.
2. In the Controller UI, select Services > Import Integration Template.
3. Browse to the "export" folder under the extracted files, select the ZIP file (the file name will be unv_tmplt_*.zip), and click Import.
4. When the file is imported successfully, refresh the Universal Templates list; the Universal Template will appear in the list.
Modifications of this integration, applied by users or customers, before or after import, might affect the supportability of this integration. For more information refer to Integration Modifications.
Configure Universal Task
For a new Universal Task, create a new task, and enter the required input fields.
Integration Modifications
Modifications applied by users or customers, before or after import, might affect the supportability of this integration. The following modifications are discouraged to retain the support level as applied for this integration.
- Python code modifications should not be done.
- Template Modifications
- General Section
- "Name", "Extension", "Variable Prefix", "Icon" should not be changed.
- Universal Template Details Section
- "Template Type", "Agent Type", "Send Extension Variables", "Always Cancel on Force Finish" should not be changed.
- Result Processing Defaults Section
- Success and Failure Exit codes should not be changed.
- Success and Failure Output processing should not be changed.
- Fields Restriction Section
The setup of the template does not impose any restrictions. However, with respect to the "Exit Code Processing Fields" section:
- Success/Failure exit codes need to be respected.
- In principle, as STDERR and STDOUT outputs can change in follow-up releases of this integration, they should not be considered as a reliable source for determining success or failure of a task.
Users and customers are encouraged to report defects, or feature requests at Stonebranch Support Desk.
Document References
This document references the following documents:
| Name | Description |
|---|---|
| Universal Templates | User documentation for creating Universal Templates in the Universal Controller user interface. |
| Universal Tasks | User documentation for creating Universal Tasks in the Universal Controller user interface. |
| Extension Output Functions | How Output Functions can be used to extract information from extension output. |
| jsonpath-ng | JSON Path predicate notation. |
Changelog
ue-kafka-monitor-1.2.0 (2025-11-13)
Enhancements
- Upgraded kafka-python to version 2.1.6, bringing major enhancements in stability, performance, and compatibility, along with numerous bug fixes (#50522)
- Improved log messages (#50656)
- Added support for PLAIN mechanism under SASL authentication (#48743)
- Added validation step after initializing a consumer, for improved stability and robustness (#50547)
- Added environment variables for configuring Kafka connection and polling parameters (#50621, #50889)
- Optimized the message retrieval logic for better efficiency (#50621)
- Optimized the pipeline stages flow for better efficiency (#50621)
Fixes
- Improved consumer shutdown handling to ensure tasks terminate gracefully without leaving open connections. (#50577)
- Fixed bug where the last message was not committed when consuming multiple messages. (#50622)
ue-kafka-monitor-1.1.0 (2023-05-10)
Enhancements
- Added: Support for Client authentication over SSL (#32826)
Fixes
- Fixed: Improved robustness of the application (#32947)
ue-kafka-monitor-1.0.2 (2022-03-30)
Enhancements
- Fixed: Change of Template Icon
ue-kafka-monitor-1.0.1 (2021-12-10)
Enhancements
- Added: Enable Always Cancel On Force Finish in Universal Template (#26607)
Fixes
- Fixed: Minor bugfixes (#26555, #26574)
ue-kafka-monitor-1.0.0 (2021-11-02)
- Added: Initial Release (#25615)