OpenTelemetry for Containerized Environments

OpenTelemetry is an open source project within the Cloud Native Computing Foundation (CNCF) that provides tools, APIs and SDKs to instrument, generate, collect and export telemetry data (metrics, logs and traces), which you can then analyze to understand your software's performance and behavior.

OpenTelemetry was formed through a merger of two former CNCF projects, OpenTracing and OpenCensus, and seeks to provide a single, standard solution to capturing telemetry data. By providing a unified approach to the problem of telemetry data capture, OpenTelemetry aims to ease the burden of instrumenting your applications and make observability more accessible to all developers.

The importance of OpenTelemetry increases in containerized environments, where applications and services are composed of numerous isolated containers. These environments present a unique challenge for monitoring and observability, as traditional methods often fall short. 

OpenTelemetry, with its robust, standardized approach, is designed to meet these challenges and provide developers with the tools they need to understand their applications' performance in these complex environments.

The Need for Observability in Containerized Systems

Observability is a crucial aspect of system administration and development: the ability to infer a system's internal state from its external outputs. In a containerized environment, observability becomes even more critical due to the distributed nature of services and applications.

The microservices architecture, which is common in containerized systems, involves breaking down an application into multiple smaller services that run in their own containers. These services communicate with each other, often over a network, to provide a unified user experience. While this architecture provides many benefits, such as scalability and resilience, it also introduces complexity. Understanding the performance and behavior of your application requires insight into each individual service and how they interact.

OpenTelemetry provides this much-needed observability in containerized environments. By collecting and analyzing telemetry data from each container and service, developers can track down performance bottlenecks, identify failures and understand how their application behaves in real-world conditions.

Core Components of OpenTelemetry


APIs

Application programming interfaces (APIs) define the contract for how your applications interact with OpenTelemetry. They provide a standard way to instrument your code, enabling it to generate telemetry data. The OpenTelemetry APIs are language-specific, meaning there are separate APIs for Java, Python, Go and other programming languages.


SDKs

Software development kits (SDKs) are libraries that implement the OpenTelemetry APIs. They provide the functionality needed to generate, collect and export telemetry data. Like the APIs, SDKs are language-specific, and developers use the SDK that corresponds to the programming language they are working in.

Instrumentation Libraries

Instrumentation libraries are pre-instrumented pieces of code that you can use in your applications to generate telemetry data. They can be used to instrument standard libraries and frameworks, making it easier for developers to get started with OpenTelemetry. These libraries are often language-specific and cover a wide range of common libraries and frameworks.


Exporters

Exporters are components that take the telemetry data generated by your applications and send it to a backend system for analysis. These backend systems could be anything from a simple logging service to a complex application performance management (APM) system. OpenTelemetry provides several standard exporters, but you can also create your own if you have specific needs.


Collectors

Collectors are components that receive, process and export telemetry data. They can be used to aggregate data from multiple sources, apply sampling or filtering, and export the processed data to one or more backend systems.
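The receive → process → export flow can be pictured with a small conceptual sketch in plain Python. This is only an illustration of the pipeline idea, not the actual Collector, which is a Go service configured declaratively:

```python
from typing import Callable, Dict, List

# A telemetry item is just a dict here; a real collector handles
# OTLP-encoded traces, metrics and logs.
Item = Dict[str, int]

def make_pipeline(processors: List[Callable[[List[Item]], List[Item]]],
                  exporters: List[Callable[[List[Item]], None]]):
    """Return a 'receiver' that pushes each batch through the processors,
    then hands the result to every exporter."""
    def receive(batch: List[Item]) -> None:
        for process in processors:
            batch = process(batch)
        for export in exporters:
            export(batch)
    return receive

# Processor: keep every other item (a crude stand-in for sampling).
def sample_half(batch: List[Item]) -> List[Item]:
    return batch[::2]

# Exporter: append to an in-memory "backend".
backend: List[Item] = []
receive = make_pipeline([sample_half], [backend.extend])

receive([{"span": i} for i in range(4)])
# backend now holds the items for spans 0 and 2
```

Because the collector sits between your applications and your backends, processing steps like batching and sampling can be changed in one place without redeploying every service.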

Traces, Metrics and Logs

Traces, metrics and logs are the three types of telemetry data that OpenTelemetry deals with:

  • Traces track the life cycle of a request as it passes through your system, helping you understand the flow of requests and identify performance bottlenecks.
  • Metrics provide numerical data about your system, such as the number of requests per second or the memory usage of your services. 
  • Logs provide text-based records of events that have occurred in your system, providing context for the other types of telemetry data.

Using OpenTelemetry for Containerized Environments

Here are the general steps involved in implementing OpenTelemetry for containerized environments.

Install OpenTelemetry SDKs in Your Containerized Applications

OpenTelemetry SDKs are the first step toward integrating OpenTelemetry into your containerized applications. These SDKs are language-specific and provide the core functionalities required for collecting telemetry data.

To begin with, you need to choose the appropriate SDK for your application’s programming language. OpenTelemetry currently supports a wide range of languages, including Java, Python, Go, JavaScript and many more. Once you have chosen the appropriate SDK, you can proceed by adding it as a dependency in your application. The exact process may vary depending on the language and the dependency management system you are using.

After adding the SDK, you need to initialize it in your application code. This usually involves creating a Tracer or Meter instance and using it to start spans or record metrics. It may also involve setting up a context propagator, an exporter, and other optional components.

Instrument Your Applications

Once the OpenTelemetry SDK is installed and initialized, the next step is to instrument your applications. Instrumentation involves modifying your application code to generate telemetry data. This data can be in the form of traces, metrics or logs and provides valuable insights into the behavior and performance of your applications.

Traces represent the life cycle of a single operation, like a request or a transaction, as it flows through various components of your application. You can create traces by starting and ending spans at different points in your application code. Each span represents a single operation and contains information like its name, its start and end time, and any associated attributes, events, or links.

Metrics represent quantitative measurements of certain aspects of your application, like the number of requests per second or the CPU usage. You can record metrics by creating and updating instruments at appropriate points in your application code. Each instrument represents a single metric and contains information like its name, its type and any associated labels or values.

Logs represent discrete events that occur during the execution of your application, like an error or a status change. You can generate logs by creating and emitting log records at appropriate points in your application code. Each log record represents a single event and contains information like its timestamp, its severity level and any associated attributes or events.

Configure Context Propagation

Context propagation is a critical aspect of distributed tracing. It allows you to link spans across different components of your application, even if they are running on separate containers or machines. Without context propagation, each span would appear as a separate trace, making it difficult to understand the end-to-end behavior of your application.

OpenTelemetry provides a flexible context propagation mechanism that supports various formats, including W3C Trace Context and B3. To configure context propagation, you need to specify the desired format when initializing the OpenTelemetry SDK in your application code. This usually involves creating a context propagator instance and setting it as the global propagator.

Once context propagation is configured, each outgoing request from your application will carry a trace context header. This header contains the trace ID and the span ID of the current span, along with any associated trace flags or trace state. Similarly, each incoming request to your application should extract this header and use it to start a new span. This way, all the spans related to a single operation can be linked together into a single trace.
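The traceparent header described above has a fixed textual layout defined by the W3C Trace Context specification: version, trace ID, span ID and flags, hyphen-separated in lowercase hex. A small helper, independent of any SDK, shows the format (the IDs below are the worked example from the specification):

```python
def format_traceparent(trace_id: int, span_id: int, sampled: bool = True) -> str:
    """Build a W3C 'traceparent' header value:
    version(2 hex)-trace_id(32 hex)-span_id(16 hex)-flags(2 hex)."""
    flags = 0x01 if sampled else 0x00
    return f"00-{trace_id:032x}-{span_id:016x}-{flags:02x}"

header = format_traceparent(trace_id=0x4BF92F3577B34DA6A3CE929D0E0E4736,
                            span_id=0x00F067AA0BA902B7)
# header == "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```

In practice you never build this header by hand; the SDK's propagator injects it into outgoing requests and parses it from incoming ones, but knowing the layout helps when debugging propagation issues in request logs.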

Deploy the OpenTelemetry Collector

The OpenTelemetry Collector is a service that receives, processes, and exports telemetry data. You can deploy it as a separate component in your containerized infrastructure, typically as a sidecar or DaemonSet within a Kubernetes cluster.

Deploying the OpenTelemetry Collector in Kubernetes involves creating a configuration file and a deployment manifest. The configuration file defines the pipelines for receiving, processing, and exporting telemetry data. It includes settings for various receivers, processors, exporters, and extensions. The deployment manifest defines the deployment settings for the collector, like the container image, the resource limits, and the environment variables.
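A minimal Collector configuration file might look like the sketch below, which wires an OTLP receiver, a batch processor and an OTLP exporter into a traces pipeline. The backend address "jaeger-backend:4317" is an assumed service name for your own environment:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  # "jaeger-backend:4317" is an assumed address for your tracing backend.
  otlp:
    endpoint: jaeger-backend:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

The same file can define additional pipelines for metrics and logs, each with its own receivers, processors and exporters.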

Once the OpenTelemetry Collector is deployed, you need to configure your applications to send their telemetry data to it. This usually involves setting the endpoint of the collector as the target of the exporter in your application code. The exact process may vary depending on the language and the OpenTelemetry SDK you are using.

Implement Data Exporting

Data exporting is the process of sending the collected telemetry data to a backend for storage and analysis. OpenTelemetry provides a wide range of exporters that support various backends, including Jaeger, Zipkin, Prometheus, and many more.

To implement data exporting, you need to choose the appropriate exporter for your backend and add it to the OpenTelemetry SDK in your application code. This usually involves creating an exporter instance, setting its configuration options, and registering it with the Tracer or Meter.

Once data exporting is implemented, each span or metric recorded by your applications will be serialized and sent to the backend. From there, you can query, visualize and analyze the data using the tools provided by the backend.

Monitor and Analyze Telemetry Data

With OpenTelemetry integrated into your containerized applications and the telemetry data being exported to your chosen backend, the final step is to monitor and analyze this data.

Monitoring involves setting up dashboards, alerts and other tools to track the performance of your applications in real time. You can create various charts and graphs to visualize the traces, metrics and logs and set up alerts to notify you of any anomalies or issues.

Analysis involves digging deeper into the data to understand the behavior of your applications and identify any potential bottlenecks or issues. You can use various statistical and machine learning techniques to analyze the data, and generate reports or insights that can help you optimize your applications.

In conclusion, OpenTelemetry provides a comprehensive, vendor-neutral solution for instrumenting and monitoring containerized applications. By following the steps outlined in this article, you can effectively integrate OpenTelemetry into your applications and gain valuable insights into their behavior and performance.

Gilad David Maayan

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.