Observability Start-Up Guide
Introduction
What is Observability?
In the ever-evolving landscape of distributed system operations, ensuring the reliability, performance, and scalability of complex applications has become increasingly difficult. System Observability has emerged as a critical practice that empowers IT organizations to effectively monitor and gain deep insights into the inner workings of their software systems. By systematically collecting and analyzing data about applications, infrastructure, and user interactions, observability enables teams to proactively identify, diagnose, and resolve issues, ultimately leading to enhanced user experiences and operational efficiency.
What is OpenTelemetry?
OpenTelemetry is an open-source project that standardizes the collection of telemetry data from software systems, making it easier for organizations to gain holistic visibility into their environments. By seamlessly integrating with various programming languages, frameworks, and cloud platforms, OpenTelemetry simplifies the instrumentation of applications, allowing developers and operators to collect rich, actionable data about their systems' behavior. The adoption of OpenTelemetry by software vendors and Application Performance Monitoring (APM) tools represents a significant shift in the observability landscape. OpenTelemetry has gained substantial traction across the industry due to its open-source, vendor-neutral approach and its ability to standardize telemetry data collection.
Many software vendors have started incorporating OpenTelemetry into their frameworks and libraries. Major cloud service providers like AWS, Azure, and Google Cloud have also embraced OpenTelemetry. In addition, many APM tools have integrated OpenTelemetry into their offerings. This integration allows users of these APM solutions to easily collect and visualize telemetry data from their applications instrumented with OpenTelemetry. It enhances the compatibility and flexibility of APM tools, making them more versatile in heterogeneous technology stacks.
Solution Architecture (Component Description)
Getting Started
Introduction
The following will provide a minimal setup to get started with Observability for Universal Automation Center.
This set-up is based on widely used Open Source tools.
This set-up is not intended for production use. To use the provided set-up in a production environment, further security configurations have to be applied.
This set-up allows collecting Metrics and Trace data from the Universal Automation Center. The collected Metrics data is stored in Prometheus for analysis in Grafana.
The collected Trace data is stored in Elasticsearch for analysis in Jaeger. The Jaeger UI is embedded in the Universal Controller.
Jaeger, Prometheus and Grafana are selected for this Get Started Guide as examples. Any other data store or analysis tool could also be used.
Metrics
Metrics data can be collected from Universal Controller, Universal Agent, OMS and Universal Tasks of type Extension.
Metrics data is pulled through the Prometheus metrics Web Service endpoint (Metrics API) and collected via user-defined Universal Event OpenTelemetry metrics, which are exported to an OpenTelemetry metrics collector (OTEL Collector).
The collected Metrics data is exported to Prometheus for analysis in Grafana.
To enable Open Telemetry metrics, an Open Telemetry (OTEL) collector with a Prometheus exporter needs to be configured.
Trace
Universal Controller will manually instrument OpenTelemetry traces for Universal Controller (UC), OMS, Universal Agent (UA), and Universal Task Extension interactions associated with task instance executions, agent registration, and Universal Task of type Extension deployment.
The collected Trace data is stored in Elasticsearch for analysis in Jaeger.
To enable tracing, an Open Telemetry span exporter must be configured.
Prerequisites
The sample set-up will be done on a single on-premises Linux server.
Server Requirements
- Linux Server
- Memory: 16GB RAM
- Storage: 70GB Net storage
- CPU: 4 CPU
- Distribution: Any major Linux distribution
- Administrative privileges are required for the installation and configuration of the required Observability tools
- Ports
The following default ports will be used.

Application | Port |
---|---|
Prometheus | http: 9090 |
Grafana | http: 3000 |
Jaeger | http: 16686 |
Elastic | http: 9200 |
OTEL Collector | 4317 (grpc), 4318 (http) |
Pre-Installed Software Components
It is assumed that the following components are installed and configured properly:
- Universal Agent 7.5.0.0 or higher
- Universal Controller 7.5.0.0 or higher
Please refer to the documentation for Installation and Applying Maintenance
and Universal Agent UNIX Quick Start Guide for further information on how to install Universal Agent and Universal Controller.
Required Software for Observability
The following open-source software needs to be installed and configured for use with Universal Automation Center.
Note: This Start-Up Guide has been tested with the software versions provided in the table below.
Software | Version | Linux Archive |
---|---|---|
elasticsearch | 7.17.12 | https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.12-linux-x86_64.tar.gz |
otelcol-contrib | 0.86.0 | https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.86.0/otelcol-contrib_0.86.0_linux_amd64.tar.gz |
jaeger all in one | 1.49.0 | https://github.com/jaegertracing/jaeger/releases/download/v1.49.0/jaeger-1.49.0-linux-amd64.tar.gz |
prometheus | 2.47.1 | https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz |
grafana-enterprise | 10.1.4 | https://dl.grafana.com/enterprise/release/grafana-enterprise-10.1.4.linux-amd64.tar.gz |
Configuration
Open Source Setup
It is important to follow the installation in the order given here, because the software components have dependencies on each other.
Example:
- Jaeger needs Elasticsearch to store the trace data.
- OTEL Collector needs Prometheus to store the metrics data.
- Grafana needs Prometheus as data source for displaying the dashboards
Set up Elasticsearch
Description:
Elasticsearch is a distributed, RESTful search and analytics engine designed for real-time search and data storage. It is used for log and event data analysis, full-text search, and more.
In this set-up Elasticsearch is used as the storage backend for Jaeger.
Installation Steps:
Follow the official documentation to install Elasticsearch on your Linux Server.
Official Documentation: Elasticsearch Installation Guide
Install the version listed in the chapter Required Software for Observability.
Configuration File:
- elasticsearch.yml: Main configuration file for Elasticsearch, containing cluster, node, network, memory, and other settings.
No adjustments to the default elasticsearch.yml file are required.
```
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
```
Test the Installation:
```
# Check default Port:
ss -tuln | grep 9200

# Result:
tcp   LISTEN 0      128    [::ffff:127.0.0.1]:9200        *:*

# Check Elasticsearch is running
curl -XGET "http://127.0.0.1:9200"

# Result:
{
  "name" : "wiesloch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "tFSoPN8lT1yS4_hEv6nzzQ",
  "version" : {
    "number" : "7.17.12",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "e3b0c3d3c5c130e1dc6d567d6baef1c73eeb2059",
    "build_date" : "2023-07-20T05:33:33.690180787Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
Set up Jaeger
Description:
Jaeger is an open-source distributed tracing system used for monitoring and troubleshooting microservices-based applications.
In this set-up, Universal Controller will manually instrument OpenTelemetry traces for Universal Controller (UC), OMS, Universal Agent (UA), and Universal Task Extension interactions associated with task instance executions, agent registration, and Universal Task of type Extension deployment.
The collected Trace data is stored in Elasticsearch for analysis in Jaeger.
Installation Steps:
Follow the official documentation to install Jaeger on your Linux Server.
Official Documentation: Jaeger documentation (jaegertracing.io)
Install the version listed in the chapter Required Software for Observability.
For a quick local installation, the Jaeger all-in-one executable can be used. It includes the Jaeger UI, jaeger-collector, jaeger-query, and jaeger-agent, with an in-memory storage component.
Configuration:
When starting the jaeger-all-in-one application, the following command-line arguments need to be set:

- --collector.otlp.grpc.host-port :14317 : Configures the host and port for the gRPC OTLP (OpenTelemetry Protocol) endpoint. It specifies that the gRPC OTLP endpoint should listen on port 14317.
- --collector.otlp.http.host-port :14318 : Configures the host and port for the HTTP OTLP endpoint, specifying port 14318.
Example:
```
./jaeger-all-in-one --collector.otlp.grpc.host-port :14317 --collector.otlp.http.host-port :14318 ..
```
Test the Installation:
```
# Check default Port:
ss -tuln | grep 16686

# Result:
tcp   LISTEN 0      128        *:16686        *:*
```
Test that the Jaeger GUI is accessible: http://<hostname>:16686/search
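As an additional command-line check, the Jaeger query service can be asked for the services it knows about. This is a minimal sketch; the /api/services endpoint is Jaeger's internal query API and may change between releases:

```bash
# List the services Jaeger currently knows about (empty until traces arrive)
curl -s http://localhost:16686/api/services
```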
Setup OTEL Collector
Description:
OpenTelemetry Collector is a vendor-agnostic observability data collector that gathers traces, metrics, and other telemetry data from various sources and sends it to different backends for analysis.
In this set-up OpenTelemetry collects Metrics data from Universal Controller, Universal Agent, OMS and Universal Tasks of type Extension.
Installation Steps:
Follow the official documentation to install OpenTelemetry on your Linux Server.
Official Documentation: OpenTelemetry Collector Installation
Install the version listed in the chapter Required Software for Observability.
Configuration Files:
- otel-collector-config.yaml: Primary configuration file, which specifies how the collector should receive, process, and export telemetry data.
Let's break down the key sections and settings in this configuration:
- Receiver: For UAC the HTTP, GRPC receiver for the OpenTelemetry Collector needs to be configured. The HTTP port (4318) or GRPC port (4317) should match the ports configured in the 'omss.conf' and 'uags.conf' files.
- Exporters: The following Exporters are configured to send telemetry data to: logging, Prometheus, and Jaeger
- Pipelines: Two pipelines are configured:
- traces: otlp → batch → jaeger
- metrics: otlp → batch → prometheus
The following provides a sample otel-collector-config.yaml file.
Note: In the omss.conf and uags.conf files, the port needs to be set to 4318. Refer to Configuring OTLP Endpoint to configure the omss.conf and uags.conf port for OpenTelemetry.
```yaml
# otel-collector-config.yaml
# the http port 4318 (default) or grpc port 4317 should be the same as in the omss.conf and uags.conf
receivers:
  otlp:
    protocols:
      http:
        #tls:
          #cert_file: ue-cert.crt
          #key_file: ue-cert.key
        #endpoint: 0.0.0.0:4318
      grpc:
        #endpoint: 0.0.0.0:4317

exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: 0.0.0.0:8881
    #tls:
      #ca_file: "/path/to/ca.pem"
      #cert_file: "/path/to/cert.pem"
      #key_file: "/path/to/key.pem"
    #namespace: UAgent
    #const_labels:
      #label1: value1
      #"another label": spaced value
    #send_timestamps: true
    #metric_expiration: 180m
    #enable_open_metrics: true
    #add_metric_suffixes: false
    resource_to_telemetry_conversion:
      enabled: true
  jaeger:
    endpoint: localhost:14250
    tls:
      insecure: true

processors:
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```
Test the Installation:
```
# Test:
ps aux | grep otelcol-contrib

# Result:
stonebr+  25926  0.3  0.7 854136 119616 ?   Sl   12:55   0:14 ./otelcol-contrib --config=otel-collector-config.yaml
```
Set up Prometheus
Description:
Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. It collects metrics from monitored targets, stores them, and provides powerful querying and alerting capabilities.
In this set-up, Prometheus is used to store the Metrics data retrieved via OpenTelemetry and the Universal Controller Metrics REST API.
Installation Steps:
Follow the official documentation to install Prometheus on your Linux Server.
Official Documentation: Prometheus Installation Guide
Install the version listed in the chapter Required Software for Observability.
Configuration Files:
- prometheus.yml: Main configuration file for Prometheus, defining scrape targets (what to monitor), alerting rules, and other settings.
In the prometheus.yml configuration file for UAC, the following scrape jobs define what metrics to collect and from where:

- OTelCollector job: Prometheus collects metrics from the target 127.0.0.1:8881, which corresponds to the OpenTelemetry Collector's Prometheus exporter endpoint (as configured in otel-collector-config.yaml).
- prometheus job: the /metrics endpoint provides internal metrics related to the Prometheus server's performance and resource utilization.
- controller job: Prometheus collects data via the Universal Controller Web Service Metrics API. Replace 'ops.admin' and 'xxx' with the actual username and password required to access the metrics.
```yaml
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  - job_name: 'OTelCollector'
    static_configs:
      - targets: ["127.0.0.1:8881"]
  #- job_name: "OTelCollector"
    #static_configs:
      #- targets: ["127.0.0.1:8888"]
  - job_name: 'prometheus'
    metrics_path: /metrics
    static_configs:
      - targets:
        - localhost:9090
  - job_name: 'controller'
    basic_auth:
      username: 'ops.admin'
      password: 'canton123'
    metrics_path: '/uc/resources/metrics' # The correct path where the metrics are exposed by the web service. Note for cloud controller: '/resources/metrics'
    static_configs:
      - targets:
        - 'localhost:8080' # Use the correct hostname or IP address and port of your web service.
```
Test the Installation:
```
# Check default Port:
ss -tuln | grep 9090

# Result:
tcp   LISTEN 0      128        *:9090        *:*
```
Test that the Prometheus GUI is accessible: http://<hostname>:9090
Set up Grafana
Description:
Grafana is an open-source platform for monitoring and observability that allows you to create, explore, and share dynamic dashboards and visualizations for various data sources, including time series databases.
In this set-up, Grafana is used to visualize and analyze the Metrics data stored in the Prometheus data source (time series database).
Installation Steps:
Follow the official documentation to install Grafana on your Linux Server.
Install the version listed in the chapter Required Software for Observability.
Configuration Files:
- grafana.ini: Grafana's main configuration file, including database connections, server settings, and global configurations.
- datasources.yaml: Configuration for data sources (e.g., Prometheus) that Grafana connects to.
- dashboards: Grafana dashboards are often defined as JSON files that can be imported into Grafana.
Official Documentation: Grafana Installation Guide
Test the Installation:
http://<hostname>:3000/
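A minimal command-line check, assuming Grafana runs on its default port 3000:

```bash
# Check the default port and that the login page responds
ss -tuln | grep 3000
curl -sI http://localhost:3000/login | head -1   # expect an HTTP 200 response
```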
Universal Controller
Description:
Universal Controller
Installation Steps:
Update Universal Controller Properties
The following uc.properties need to be set in order to enable metrics and traces from Universal Controller:
Name | Description |
---|---|
uc.otel.exporter.otlp.metrics.endpoint | The OTLP metrics endpoint to connect to. Must be a URL with a scheme of either http or https based on the use of TLS. Default is http://localhost:4317 when protocol is grpc, and http://localhost:4318/v1/metrics when protocol is http/protobuf. |
uc.otel.exporter.otlp.traces.endpoint | The OTLP traces endpoint to connect to. Must be a URL with a scheme of either http or https based on the use of TLS. Default is http://localhost:4317 when protocol is grpc, and http://localhost:4318/v1/traces when protocol is http/protobuf |
Please refer to the uc.properties documentation for a list of all configuration options.
Sample Configuration Files
The following provides a minimum uc.properties file:
```
# uc.properties
# Enable metrics and trace from UC Controller
# The OTLP traces endpoint to connect to (grpc):
uc.otel.exporter.otlp.traces.endpoint http://localhost:4317
# The OTLP metrics endpoint to connect to (grpc):
uc.otel.exporter.otlp.metrics.endpoint http://localhost:4317
```
Update Universal Controller GUI
In the Universal Controller GUI properties, set:
- Open Telemetry Visualization URL: http://<hostname>:16686/trace/${traceId}?uiFind=${spanId}&uiEmbed=v0
- Open Telemetry Visualization In IFrame: True
Official Documentation: link to uc.properties open telemetry properties.
Test Universal Controller GUI with Jaeger embedded
Select the Details of Task Instance → Show Trace.
The embedded Jaeger UI should be displayed.
Jaeger UI embedded in Universal Controller GUI:
Universal Agent
Description:
The following describes the steps to enable tracing and metrics for UAG and OMS Server.
The set-up described here uses the HTTP protocol. In addition to HTTP (default), HTTPS is also supported.
Refer to the documentation on how to Enable and Configure SSL/TLS for OMS Server and UAG:
Installation Steps:
Enabling Metrics/Traces
Metrics and Traces are turned off by default in both UAG and OMS Server. The user must configure two new options to enable metrics and traces.
Metrics:
Component | Configuration File Option |
---|---|
UAG | otel_export_metrics YES |
OMS Server | otel_export_metrics YES |
Traces:
Component | Configuration File Option |
---|---|
UAG | otel_enable_tracing YES |
OMS Server | otel_enable_tracing YES |
Configure Service Name
All applications using OpenTelemetry must register a service.name, including UAG and OMS Server.
Component | Configuration File Option |
---|---|
UAG | otel_service_name <agent_name> |
OMS Server | otel_service_name <oms_agent_name> |
Configuring OTLP Endpoint
Both the metrics and tracing engines push the relevant data to the OpenTelemetry collector using the HTTP(S) protocol (the gRPC protocol is NOT supported in this release). In most scenarios, the traces and metrics will be sent to the same collector, but this is not strictly necessary. To account for this, two new options are available in both UAG and OMS Server.
Metrics:
Component | Configuration File Option |
---|---|
UAG | otel_metrics_endpoint http://localhost:4318 |
OMS Server | otel_metrics_endpoint http://localhost:4318 |
Traces:
Component | Configuration File Option |
---|---|
UAG | otel_trace_endpoint http://localhost:4318 |
OMS Server | otel_trace_endpoint http://localhost:4318 |
Configure how often to export the metrics from UAG and OMS Server
Component | Configuration File Option |
---|---|
UAG | otel_metrics_export_interval 60 |
OMS Server | otel_metrics_export_interval 60 |
The value:
- defaults to the OpenTelemetry default of 60 seconds
- is specified in seconds
- must be greater than 0
- cannot exceed 2147483647
Sample Configuration Files
The following provides the sample set-up for UAG and OMS Server.
The otel_metrics_export_interval option is not set; in that case the default value of 60s applies.
```
# /etc/universal/uags.conf:
otel_export_metrics YES
otel_enable_tracing YES
otel_service_name agt_lx_wiesloch_uag
otel_metrics_endpoint http://localhost:4318
otel_trace_endpoint http://localhost:4318
```
```
# /etc/universal/omss.conf:
otel_export_metrics YES
otel_enable_tracing YES
otel_service_name agt_lx_wiesloch
otel_metrics_endpoint http://localhost:4318
otel_trace_endpoint http://localhost:4318
```
Note: After adjusting uags.conf and omss.conf, restart the Universal Agent.
sudo /opt/universal/ubroker/ubrokerd restart
Official Documentation: Links to OMS and UAG open telemetry configuration options.
Universal Automation Center Observability Tutorials
The first tutorial will explain how to collect Metrics Data from the different UAC components and display the collected data in a Grafana Dashboard using Prometheus as the Datasource.
The second tutorial explains how to collect Trace data from the different UAC components and display the collected data in a Grafana Dashboard using Jaeger as the Datasource.
Metrics data can be collected from Universal Controller, Universal Agent, OMS and Universal Tasks of type Extension.
Metrics data is pulled through the Prometheus metrics Web Service endpoint (Metrics API) and collected via user-defined Universal Event OpenTelemetry metrics, which are exported to an OpenTelemetry metrics collector (OTEL Collector).
The collected Metrics data is exported to Prometheus for analysis in Grafana.
To enable Open Telemetry metrics, an Open Telemetry (OTEL) collector with a Prometheus exporter will be configured.
Follow along with the video as you complete the tutorials:
Tutorial 1: Metrics Data Collection and Analysis using Grafana
Installation
Prerequisites
The sample set-up will be done on a single on-premises Linux server.
Server Requirements
- Linux Server
- Memory: 16GB RAM
- Storage: 70GB Net storage
- CPU: 4 CPU
- Distribution: Any major Linux distribution
- Administrative privileges are required for the installation and configuration of the required Observability tools
- Ports
The following default ports will be used.

Application | Port |
---|---|
Prometheus | http: 9090 |
Grafana | http: 3000 |
OTEL Collector | 4317 (grpc), 4318 (http) |
Pre-Installed Software Components
The following components must be installed and configured properly:
- Universal Agent 7.5.0.0 or higher
- Universal Controller 7.5.0.0 or higher
Please refer to the documentation for Installation and Applying Maintenance
and Universal Agent UNIX Quick Start Guide for further information on how to install Universal Agent and Universal Controller.
Required Software for Observability - Metrics
For this tutorial, the following open-source software needs to be installed and configured for use with Universal Automation Center.
This tutorial has been tested with the software versions provided in the table below.
Software | Version | Linux Archive |
---|---|---|
otelcol-contrib | 0.82.0 | https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.82.0/otelcol-contrib_0.82.0_linux_amd64.tar.gz |
prometheus | 2.47.1 | https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz |
grafana-enterprise | 10.1.4 | https://dl.grafana.com/enterprise/release/grafana-enterprise-10.1.4.linux-amd64.tar.gz |
Installation Steps
- Install Prometheus
- Install OpenTelemetry Collector
- Install Grafana
- Configure Universal Agents to send metrics to OpenTelemetry
- Update Universal Controller (uc.properties) to send metrics to OpenTelemetry
- Configure a sample Dashboard in Grafana (add Prometheus datasource, create visualization)
- Optionally configure Grafana for TLS
The required applications Prometheus, OpenTelemetry Collector and Grafana will be installed for this tutorial in the home directory of a Linux user with sudo permissions.
Replace the sample Server IP (192.168.88.17) and Name (wiesloch) with your Server IP and Hostname.
1. Install Prometheus
Prometheus will be configured to store Metrics from OTelCollector and Universal Controller. A minimal command sketch covering these steps is shown after the step list below.
Download prometheus to your home directory and unpack it
Create Softlink
Adjust config file: prometheus.yml
Create Start Script
Create Stop Script
Start/Stop Prometheus
Checks
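The steps above can be scripted roughly as follows. This is a minimal sketch: the download URL matches the version table above, but the install directory, softlink name, and start/stop script names are assumptions to adjust for your environment.

```bash
cd ~
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
tar xzf prometheus-2.47.1.linux-amd64.tar.gz
ln -s prometheus-2.47.1.linux-amd64 prometheus   # version-independent path for the scripts below

# prometheus/prometheus.yml: add the OTelCollector, prometheus, and controller scrape jobs
# shown in the "Set up Prometheus" section of this guide.

cat > ~/start_prometheus.sh <<'EOF'
#!/bin/bash
cd ~/prometheus
nohup ./prometheus --config.file=prometheus.yml > prometheus.log 2>&1 &
echo $! > prometheus.pid
EOF

cat > ~/stop_prometheus.sh <<'EOF'
#!/bin/bash
kill "$(cat ~/prometheus/prometheus.pid)"
EOF
chmod +x ~/start_prometheus.sh ~/stop_prometheus.sh

# Checks
~/start_prometheus.sh
ss -tuln | grep 9090
curl -s http://localhost:9090/-/ready
```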
2. Install OpenTelemetry Collector
The OpenTelemetry Collector will be configured to export metrics to Prometheus. A minimal command sketch covering these steps is shown after the step list below.
Download opentelemetry-collector to your home directory and unpack it
Adjust config file: otel-collector-config.yaml
Create Start Script
Create Stop Script
Start/Stop
Checks
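A minimal sketch of these steps, assuming the collector is unpacked into ~/otelcol-contrib and uses the otel-collector-config.yaml shown earlier (a metrics-only pipeline is sufficient for this tutorial):

```bash
cd ~
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.82.0/otelcol-contrib_0.82.0_linux_amd64.tar.gz
mkdir -p otelcol-contrib && tar xzf otelcol-contrib_0.82.0_linux_amd64.tar.gz -C otelcol-contrib

# otelcol-contrib/otel-collector-config.yaml: otlp receiver (4317/4318) and prometheus exporter (8881),
# as in the sample configuration in the "Setup OTEL Collector" section.

cat > ~/start_otelcol.sh <<'EOF'
#!/bin/bash
cd ~/otelcol-contrib
nohup ./otelcol-contrib --config=otel-collector-config.yaml > otelcol.log 2>&1 &
echo $! > otelcol.pid
EOF

cat > ~/stop_otelcol.sh <<'EOF'
#!/bin/bash
kill "$(cat ~/otelcol-contrib/otelcol.pid)"
EOF
chmod +x ~/start_otelcol.sh ~/stop_otelcol.sh

# Checks
~/start_otelcol.sh
ps aux | grep otelcol-contrib
ss -tuln | grep -E '4317|4318|8881'
```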
3. Install Grafana
Download Grafana to your home directory and unpack it
Create Softlink
Create Start Script
Create Stop Script
Start/Stop Grafana
Checks
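A minimal sketch of these steps. The archive unpacks into a versioned Grafana directory; the directory name, softlink, and script names used here are assumptions:

```bash
cd ~
wget https://dl.grafana.com/enterprise/release/grafana-enterprise-10.1.4.linux-amd64.tar.gz
tar xzf grafana-enterprise-10.1.4.linux-amd64.tar.gz
ln -s grafana-10.1.4 grafana    # adjust to the actual directory name created by the archive

cat > ~/start_grafana.sh <<'EOF'
#!/bin/bash
cd ~/grafana
nohup ./bin/grafana server > grafana.log 2>&1 &
echo $! > grafana.pid
EOF

cat > ~/stop_grafana.sh <<'EOF'
#!/bin/bash
kill "$(cat ~/grafana/grafana.pid)"
EOF
chmod +x ~/start_grafana.sh ~/stop_grafana.sh

# Checks
~/start_grafana.sh
ss -tuln | grep 3000
```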
4. Configure Universal Agents to send metrics to OpenTelemetry
Configure uags.conf and omss.conf of the Universal Agent
Start/Stop
Checks
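A minimal sketch of this step; the option names are those from the sample uags.conf and omss.conf shown earlier in this guide (adjust the service names and endpoints to your environment):

```bash
# Add the OTel options to the Agent configuration files (see the sample files earlier in this guide)
sudo vi /etc/universal/uags.conf   # otel_export_metrics YES, otel_service_name ..., otel_metrics_endpoint http://localhost:4318
sudo vi /etc/universal/omss.conf   # otel_export_metrics YES, otel_service_name ..., otel_metrics_endpoint http://localhost:4318

# Start/Stop: restart the Universal Agent so the new options take effect
sudo /opt/universal/ubroker/ubrokerd restart

# Checks: Agent and OMS Server processes are running
ps -ef | grep -E 'ubroker|uag|oms' | grep -v grep
```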
5. Update Universal Controller (uc.properties) to send metrics to OpenTelemetry
Update uc.properties file
Official Documentation: link to uc.properties open telemetry properties.
Start/ Stop Tomcat
Checks
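A minimal sketch of this step. The uc.properties location and the Tomcat paths below are assumptions and depend on your Universal Controller installation:

```bash
# Add the OTLP endpoints to uc.properties (see the sample uc.properties earlier in this guide)
sudo vi /opt/tomcat/conf/uc.properties   # uc.otel.exporter.otlp.metrics.endpoint / uc.otel.exporter.otlp.traces.endpoint

# Start/Stop Tomcat (assumed installation path)
sudo /opt/tomcat/bin/shutdown.sh
sudo /opt/tomcat/bin/startup.sh

# Checks: the controller scrape target is up and uc_* metrics are queryable in Prometheus
curl -s 'http://localhost:9090/api/v1/query?query=up{job="controller"}'
```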
6. Configure a sample Dashboard in Grafana (add Prometheus datasource, create visualization)
In the following example, a Grafana Dashboard with one visualization showing the OMS Server Status will be configured.
The following Steps need to be performed:
- Log-in to Grafana
- Add prometheus as data source for Grafana
- Create a new Dashboard and add a new visualization to it
- Configure visualization
- Display Dashboard
Log-in to Grafana
URL: http://192.168.88.17:3000/ (user: admin, password: admin)
Add prometheus as data source for Grafana
Datasource: Prometheus - http://192.168.88.17:9090 ( adjust the sample IP to your Server IP Address or Hostname )
Test the connection:
Create a new Dashboard and add a new visualization to it
Configure Visualization
- Select Prometheus as Data Source
- Select the Metric uc_oms_server_status
- Enter a Title and Description e.g. OMS Server Status
- In the Legend Options enter {{instance}}
Display Dashboard
7. Optionally configure Grafana for TLS
TLS for Grafana
Tutorial 2: Traces Data Collection and Analysis using Grafana and Jaeger
This tutorial will show how to collect and visualize traces from the different UAC components in Grafana and Jaeger.
This tutorial requires that all configuration steps from the first tutorial have already been performed.
After finishing this Tutorial, you will be able to collect and display Metrics and Traces in Grafana and show Jaeger Traces embedded in the Universal Controller UI.
Universal Controller will manually instrument OpenTelemetry traces for Universal Controller (UC), OMS, Universal Agent (UA), and Universal Task Extension interactions associated with task instance executions, agent registration, and Universal Task of type Extension deployment.
To enable tracing, an Open Telemetry span exporter must be configured.
The collected Trace data is stored in Elasticsearch for analysis in Jaeger and Grafana.
The following outlines the architecture:
Installation
Prerequisites
This tutorial requires that all configuration steps from the first tutorial have already been performed.
Server Requirements
The same Linux Server as in the first part of the Tutorial will be used.
The following additional ports need to be opened:
Application | Port |
---|---|
Elasticsearch | http: 9200 |
Jaeger | http: 16686 |
Pre-Installed Software Components
This tutorial requires the following software, installed during the first Tutorial.
Software | Version | Linux Archive |
---|---|---|
Universal Controller | 7.5.0.0 or higher | Download via Stonebranch Support Portal |
Universal Agent | 7.5.0.0 or higher | Download via Stonebranch Support Portal |
otelcol-contrib | 0.82.0 | https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.82.0/otelcol-contrib_0.82.0_linux_amd64.tar.gz |
prometheus | 2.47.1 | https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz |
grafana-enterprise | 10.1.4 | https://dl.grafana.com/enterprise/release/grafana-enterprise-10.1.4.linux-amd64.tar.gz |
Required Software for Observability - Traces
To add support for traces, the following open-source software needs to be installed.
Software | Version | Linux Archive |
---|---|---|
elasticsearch | 7.17.12 | https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.12-linux-x86_64.tar.gz |
jaeger | 1.47.0 | https://github.com/jaegertracing/jaeger/releases/download/v1.47.0/jaeger-1.47.0-linux-amd64.tar.gz |
Installation Steps
- Install Elasticsearch
- Install Jaeger
- Add a pipeline for traces in OpenTelemetry Collector
- Enable Tracing in Universal controller
- Enable Tracing in Universal Agents
- Start/Stop Script for all Applications
- Configure a sample Dashboard for traces in Grafana (add Jaeger as datasource, create visualization, view trace)
The required applications Elasticsearch, Prometheus, OpenTelemetry Collector, Jaeger and Grafana will be installed for this tutorial in the home directory of a Linux user with sudo permissions.
Replace the sample Server IP (192.168.88.17) and Name (wiesloch) with your Server IP and Hostname.
1. Install Elasticsearch
Download Elasticsearch to your home directory and unpack it
Adjust jvm options in jvm_heap_size.options
Create Start Script
Create Stop Script
Start/Stop
Checks
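A minimal sketch of these steps. The 4g heap size is an assumed value for a 16GB server, and the script names are assumptions:

```bash
cd ~
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.12-linux-x86_64.tar.gz
tar xzf elasticsearch-7.17.12-linux-x86_64.tar.gz
ln -s elasticsearch-7.17.12 elasticsearch

# Limit the JVM heap (file name as in the step above; roughly half the available RAM is a common rule of thumb)
cat > ~/elasticsearch/config/jvm.options.d/jvm_heap_size.options <<'EOF'
-Xms4g
-Xmx4g
EOF

cat > ~/start_elasticsearch.sh <<'EOF'
#!/bin/bash
~/elasticsearch/bin/elasticsearch -d -p ~/elasticsearch/elasticsearch.pid
EOF

cat > ~/stop_elasticsearch.sh <<'EOF'
#!/bin/bash
kill "$(cat ~/elasticsearch/elasticsearch.pid)"
EOF
chmod +x ~/start_elasticsearch.sh ~/stop_elasticsearch.sh

# Checks
~/start_elasticsearch.sh
ss -tuln | grep 9200
curl -XGET "http://127.0.0.1:9200"
```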
2. Install Jaeger
Download jaeger to your home directory and unpack it
Add new config file: jaeger-config.yaml
Create Start Script
Create Stop Script
Start/Stop
Checks
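A minimal sketch of these steps. The content of the guide's jaeger-config.yaml is not reproduced here; instead, this sketch points the all-in-one binary at Elasticsearch with the standard SPAN_STORAGE_TYPE environment variable and --es.server-urls flag, and keeps the OTLP collector ports on 14317/14318 so they do not clash with the OTEL Collector's own 4317/4318 listeners:

```bash
cd ~
wget https://github.com/jaegertracing/jaeger/releases/download/v1.47.0/jaeger-1.47.0-linux-amd64.tar.gz
tar xzf jaeger-1.47.0-linux-amd64.tar.gz
ln -s jaeger-1.47.0-linux-amd64 jaeger

cat > ~/start_jaeger.sh <<'EOF'
#!/bin/bash
cd ~/jaeger
SPAN_STORAGE_TYPE=elasticsearch nohup ./jaeger-all-in-one \
  --es.server-urls=http://127.0.0.1:9200 \
  --collector.otlp.grpc.host-port :14317 \
  --collector.otlp.http.host-port :14318 > jaeger.log 2>&1 &
echo $! > jaeger.pid
EOF

cat > ~/stop_jaeger.sh <<'EOF'
#!/bin/bash
kill "$(cat ~/jaeger/jaeger.pid)"
EOF
chmod +x ~/start_jaeger.sh ~/stop_jaeger.sh

# Checks
~/start_jaeger.sh
ss -tuln | grep 16686
```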
3. Add a pipeline for traces in OpenTelemetry Collector
Adjust otel-collector-config.yaml
Create Start Script
The same start Script as in the first part of the Tutorial should be used.
Create Stop Script
The same stop Script as in the first part of the Tutorial should be used.
Start/Stop
Checks
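If the otel-collector-config.yaml from Tutorial 1 only contained the metrics pipeline, the additions for traces look like the following snippet; it matches the full sample configuration in the "Setup OTEL Collector" section above:

```yaml
# Additions to otel-collector-config.yaml for traces
exporters:
  jaeger:
    endpoint: localhost:14250   # Jaeger collector gRPC endpoint
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```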
4. Enable Tracing in Universal controller
Update uc.properties
Official Documentation: link to uc.properties open telemetry properties.
Start/ Stop Tomcat
Checks
Update Universal Controller
Enable Open Telemetry Visualization in IFrame and add the Open Telemetry Visualization URL to point to Jaeger or, optionally, to Grafana.
Jaeger to visualize Traces
UC Controller Properties:
- open telemetry visualization URL for Jaeger: http://wiesloch:16686/trace/${traceId}?uiFind=${spanId}&uiEmbed=v0
- open telemetry visualization Iframe : True
Optionally use Grafana to visualize Traces
To embed the Grafana dashboard in Universal Controller, set "allow_embedding = true" in the grafana.ini file and change "cookie_samesite" to "disabled" so that users can log in and stay logged in to the Grafana page.
Grafana default.ini
UC Controller Properties:
- open telemetry visualization URL for Grafana: https://192.168.88.17:3000/explore?left={"datasource":"yourjaegeruid","queries":[{"refId":"A","query":"${traceId}"}]}
- open telemetry visualization Iframe : True
Note: yourjaegeruid can be looked up in the Grafana URL for the Jaeger data source (see screenshot below).
5. Enable Tracing in Universal Agents
Configure uags.conf and omss.conf of the Universal Agent
Start/Stop
Checks
6. Start/Stop Script for all Applications
It is important to start the applications in the correct order, because the software components have dependencies on each other.
Example:
- Jaeger needs Elasticsearch to store the trace data.
- OTEL Collector needs Prometheus to store the metrics data.
- Grafana needs Prometheus as data source for displaying metrics data in the dashboard
- Grafana needs Jaeger as data source for displaying traces in the dashboard
The following provides the correct order to start/stop all applications.
Startup All
Stop All
Startup Script to start all Applications
Stop Script to stop all Applications
Start/Stop All Applications
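A minimal sketch combining the per-application start/stop scripts created in the previous steps, in dependency order. The script names are the assumed ones used throughout this tutorial:

```bash
cat > ~/start_all.sh <<'EOF'
#!/bin/bash
~/start_elasticsearch.sh        # storage backend, must be up before Jaeger
sleep 20                        # give Elasticsearch time to start
~/start_jaeger.sh
~/start_prometheus.sh
~/start_otelcol.sh              # exports to Prometheus and Jaeger
~/start_grafana.sh
EOF

cat > ~/stop_all.sh <<'EOF'
#!/bin/bash
~/stop_grafana.sh
~/stop_otelcol.sh
~/stop_prometheus.sh
~/stop_jaeger.sh
~/stop_elasticsearch.sh
EOF
chmod +x ~/start_all.sh ~/stop_all.sh
```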
7. Configure a sample tracing dashboard in Grafana (add jaeger data source, create visualization, view trace)
In this example, a Grafana dashboard with one tracing visualization showing incoming traces from the controller will be configured.
The following steps need to be performed:
- Log-in to Grafana
- Add Jaeger as a data source for Grafana
- Create or access a dashboard and add a visualization
- Configure the visualization with the jaeger data source set
- Click on a trace to open detailed information about the trace
Log-in to Grafana
URL: http(s)://192.168.88.17:3000/
The default username and password are: user: admin, password: admin
Add jaeger as a data source for Grafana
Go to the data sources tab and choose Jaeger as a data source
The default server url for jaeger is: http://192.168.88.17:16686/
At the bottom, test the connection and save the data source if it connects successfully
Head to the dashboards tab, create or open a dashboard, and add a visualization to it
Configure the visualization: (Examples)
- Select the jaeger data source
- Select the "search" tab for a general view of traces or the "traceid" tab for a specific trace
- Choose the controller as the service name and choose "all" operation names
- Add the "sort by" transformation set to start time
- Select that all tooltips should be shown
- Save the visualization
Click on a given trace and open the link in a new tab
The trace will now be shown in a detailed view
Traces can also be viewed using the "show trace" option in the controller:
Go to the properties page and set "Open Telemetry Visualization In IFrame" to "true"
Set the value for the "Open Telemetry Visualization URL" to the Grafana URL: "http(s)://192.168.88.17:3000/explore?left={"datasource":"yourjaegeruid","queries":[{"refId":"A","query":"${traceId}"}]}"
Head over to a task and right-click on it. Choose the "show trace" under the "details" tab
A new browser tab will open with the given task trace inside of the Grafana explore section
Example Widgets inside of Grafana
Grafana has no sharing options for Dashboards or Widgets. Copying the JSON model of one of the widgets will result in a "datasource not found" error. This can be solved by choosing the same metric in the metric picker and clicking the "run query" button.
Widgets for System data (Agents, OMS)
Number of Agents connected
Description:
This Widget shows the number of Agents connected and has an indicator for the upper limit of how many Agents can connect to the controller.
The upper limit depends on the number of licenses the controller owns.
This Widget uses the “Time series” configuration to give a real time update on the Agent status.
Configuration:
The Widget is constructed using 2 metrics derived from the controller:
The first query is the “uc_license_agents_distributed_max” metric which will show the maximum amount of licenses available, which is used to show the upper limit graph.
The second query is the “uc_license_agents_distributed_used” metric which shows the amount of Agents currently connected to the controller.
Seen below is an example of the configuration used to make it an opaque graph that runs through the time series.
Below are the 2 PromQL lines used to configure the queries
uc_license_agents_distributed_max
uc_license_agents_distributed_used
OMS Server Status
Description:
OMS Server Status shown in a "Status History" graph. Depending on the number of OMS servers connected, the graph changes to represent them.
The graph also shows the different states an OMS server can be in.
Configuration:
This query is made from the “uc_oms_server_status” metric using the code
sum by(instance) (uc_oms_server_status)
The metric can report 3 different values depending on the OMS status: 1 for “running”, 0 for “not running”, -1 for “in doubt”.
To ensure the Widget shows this information, we add value mappings for the different states the server can be in.
Active OMS Server Client connections
Description:
Widget that shows how many Clients are connecting to an OMS server. It will count the connections from agents and controller that connect to the OMS server.
Configuration:
This query uses the “ua_active_connections” metric to read out the number of active connections to all OMS servers and shows them using the “Stats” graph.
To set up the Widget, select the metric using the metrics browser or paste the following line in the code builder of Grafana:
sum by(instance) (ua_active_connections)
As more OMS servers are sending metrics to the OTelCollector, the stats graph will update to represent them.
Furthermore, in the settings on the right side under the “Value mappings” tab, we add a mapping that says "0 → No active connections"
Widgets for observing Tasks and Task statuses
Tasks started in a set time period
Description:
This “Stat” graph shows how many tasks have been created in a time period that can be specified. This example shows the Tasks from a 24h time period.
Configuration:
To create this Widget, we use the “uc_history_total” metric to receive all the data from tasks of the universal controller and universal agent.
When creating the query, use the metric browser to find the “uc_history_total” metric and choose the operations “Increase” from the “Range functions” tab and the “Sum” from the Aggregations tab.
Set the “Increase” range to the time period you wish to observe (in the example, 24h) and set the “Sum by” label to “task_type”.
The code builder should now look like this:
sum by(task_type) (increase(uc_history_total[24h]))
Task duration split of tasks launched in a time period
Description:
This “Stat” graph is showing how many tasks have been created in a time period that can be specified. This example shows the Tasks from a 24h time period.
Configuration:
To create this Widget, we use the “uc_history_total” metric to receive all the data from tasks of the universal controller and universal agent.
When creating the query, use the metric browser to find the “uc_history_total” metric and choose the operations “Increase” from the “Range functions” tab and the “Sum” from the Aggregations tab.
Label the “Increase” range to the specified time period you wish to observe (in the example 24h) and set the “Sum by” label to “task_type”.
The code builder should now look like this:
You can also add this line of code directly into the code tab to receive the settings:
sum by(task_type) (changes(uc_task_instance_duration_seconds_bucket[24h]))
It is important that in the standard options tab of the general settings the unit is set to “duration (s)” and the decimals are set to at least 1 for more accuracy.
Successful/Late Finish ratio shown in a Pie chart of a given Task type
Description:
Pie chart which shows the percentage of "Late Tasks" in reference to the total amount of tasks (Last 1h in this example; Linux Tasks in this example).
Configuration:
This pie chart is made up of 2 queries that will represent the ratio of “Late Finish” Tasks and “successful” Tasks.
The first query uses the “uc_task_instance_late_finish_total” metric with a label filter on the specified Task we want to observe.
Using the Operator “Delta” gives the query a time period to observe the metric data. (In this example it is 1h).
The “Sum by” is set to “task_type” to ensure all metric data of the specified task is displayed.
Using an “Override” we name the query for the pie chart and set a color.
The second query is made up of the “uc_history_total” and “uc_task_instance_late_finish_total” metrics, subtracting the “Late Finish” tasks from the total.
Similar to the first query, we specify a time period using the “Delta” operator and the “Sum by” operator, and set the label filter to the tasks we observe.
Using a “Binary operations with query” operation allows the second metric to be set to “uc_task_instance_late_finish_total”, configured the same way as the first query.
Using the “-” operation results in all tasks being shown once and not being counted a second time for the pie chart.
Using the “Override” we set a color and optionally a name for the pie chart.
The code for the queries is below:
first query
sum by(task_type) (delta(uc_task_instance_late_finish_total[1h]))
second query
sum by(task_type) (delta(uc_history_total[1h])) - sum by(task_type) (delta(uc_task_instance_late_finish_total[1h]))
Widgets for Traces
Bar Chart of incoming traces
Description:
This Widget displays all the traces coming from the Universal Controller and displays their duration. Hovering over a trace will give more information about the trace.
Configuration:
The bar chart uses the “Jaeger” data source and accesses all traces that come from the Universal Controller. The query is configured as follows:
Once the query is set up, we need to add a transformation for the graph. Going to the "transform" tab, choosing "sort by", and sorting by the start time will result in the trace links matching the correct traces.
Going to the general settings tab and changing the X-Axis to the start time and setting the Y-axis to a log10 scale will allow for more visibility.
Changing the Tooltip to show all information allows the user to hover over a trace and inspect it more closely using Grafana’s trace tools.
Clicking on the trace link will result in a new tab opening up for detailed views of the trace:
Example widget for universal extensions: Cloud Data Transfer
Max Avg. duration of file transfers
A stat graph showing the maximum average time for a Cloud Data Transfer task.
To configure the query we use the "sum by" and "increase" operators with 2 metrics that are divided by each other. For more clarity an override is added to change the color of the widget.
The code shown above is pasted here:
sum(increase(ue_cdt_rclone_duration_sum{universal_extension_name="ue-cloud-dt"}[24h])) / sum(increase(ue_cdt_rclone_duration_count{universal_extension_name="ue-cloud-dt"}[24h]))
The time interval can be changed to determine the time period. This example shows the maximum average over 24h.
It is important that under the general settings the calculation is set to "Max". This allows the query to give only the maximum of the calculated average.
If the value on the stat graph is not shown in seconds, it can help to set the unit to "seconds"; this will force the stat graph to show the given value in seconds.