Distributed Tracing
Distributed traces provide a detailed, end-to-end view of a single API request or transaction as it traverses through various services and components. Traces are crucial for understanding the flow of requests and identifying bottlenecks or latency issues. Here’s how you can make use of traces for API observability:
- End-to-end request tracing: Implement distributed tracing across your microservices architecture to track requests across different services and gather data about each service’s contribution to the overall request latency.
- Transaction Flow: Visualize the transaction flow by connecting traces to show how requests move through different services, including entry points (e.g., API gateway), middleware and backend services.
- Latency Analysis: Analyze trace data to pinpoint which service or component is causing latency issues, allowing for quick identification and remediation of performance bottlenecks.
- Error Correlation: Use traces to correlate errors across different services to understand the root cause of issues and track how errors propagate through the system.
OpenTelemetry
Starting from Tyk Gateway version 5.2, you can leverage the power of OpenTelemetry, an open-source observability framework designed for cloud-native software. This enhances your API monitoring with end-to-end distributed tracing. At this time, Tyk does not support OpenTelemetry metrics or logging, but we have these on our roadmap for future enhancement of the product. This documentation will guide you through the process of enabling and configuring OpenTelemetry in Tyk Gateway. You’ll also learn how to customize trace detail levels to meet your monitoring requirements. For further guidance on configuring your observability back-end, explore our guides for Datadog, Dynatrace, Jaeger and New Relic. All the configuration options available when using Tyk’s OpenTelemetry capability are documented in the Tyk Gateway configuration guide.
Using OpenTelemetry with Tyk
OpenTelemetry support must be enabled at the Gateway level. Set opentelemetry.enabled to true in the Tyk Gateway configuration file (typically tyk.conf), or set the equivalent environment variable TYK_GW_OPENTELEMETRY_ENABLED to true.
By default, OpenTelemetry spans are exported to the collector using the gRPC protocol to localhost:4317. You can choose between HTTP and gRPC protocols by setting the opentelemetry.exporter field to http or grpc, and you can specify an alternative target using the opentelemetry.endpoint setting.
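For example, a minimal tyk.conf fragment that enables OpenTelemetry and exports spans over gRPC to a locally running collector might look like this sketch (the endpoint value is an assumption for a local collector; adjust it to your deployment):

```json
{
  "opentelemetry": {
    "enabled": true,
    "exporter": "grpc",
    "endpoint": "localhost:4317"
  }
}
```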
Detailed Tracing
You can generate more detailed traces for requests to an API by setting the server.detailedTracing flag in the Tyk Vendor Extension of the API definition. For users of the Tyk Dashboard UI, the OpenTelemetry Tracing option in the Tyk OAS API Designer allows you to set and unset this option for the API.
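As an illustration, the relevant fragment of a Tyk OAS API definition with detailed tracing enabled might look like the following sketch (all other required fields of the API definition are omitted for brevity):

```json
{
  "x-tyk-api-gateway": {
    "server": {
      "detailedTracing": true
    }
  }
}
```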

Understanding The Traces
Tyk Gateway exposes a helpful set of span attributes and resource attributes with the generated spans. These attributes provide useful insights for analyzing your API requests. A clear analysis can be obtained by observing the specific actions and associated context within each request/response. This is where span and resource attributes play a significant role.
Span Attributes
A span is a named, timed operation that represents a single unit of work within a trace. Multiple spans represent different parts of the workflow and are pieced together to create a trace. While each span includes a duration indicating how long the operation took, span attributes provide additional contextual metadata as key-value pairs. Tyk automatically sets the following span attributes:
- tyk.api.name: API name.
- tyk.api.orgid: Organization ID.
- tyk.api.id: API ID.
- tyk.api.path: API listen path.
- tyk.api.tags: If tagging is enabled in the API definition, the tags are added here.
Resource Attributes
Resource attributes provide contextual information about the entity that produced the telemetry data. Tyk exposes the following resource attributes:
Service Attributes
The service attributes supported by Tyk are:

| Attribute | Type | Description |
|---|---|---|
| service.name | String | Service name for Tyk API Gateway: tyk-gateway |
| service.instance.id and tyk.gw.id | String | The Node ID assigned to the gateway, e.g. solo-6b71c2de-5a3c-4ad3-4b54-d34d78c1f7a3 |
| service.version | String | The service version, e.g. v5.2.0 |
| tyk.gw.dataplane | Bool | Whether the Tyk Gateway is hybrid (slave_options.use_rpc = true) |
| tyk.gw.group.id | String | The slave_options.group_id of the gateway. Populated only if the gateway is hybrid. |
| tyk.gw.tags | []String | The gateway segment_tags. Populated only if the gateway is segmented. |
Common HTTP Span Attributes
Tyk follows the OpenTelemetry semantic conventions for HTTP spans. You can find detailed information on common attributes here. Some of these common attributes include:
- http.method: HTTP request method.
- http.scheme: URL scheme.
- http.status_code: HTTP response status code.
- http.url: Full HTTP request URL.
Advanced OpenTelemetry Capabilities
Context Propagation
This setting allows you to specify the type of context propagator to use for trace data. It’s essential for ensuring compatibility and data integrity between different services in your architecture. The available options are:
- tracecontext: supports the W3C Trace Context format.
- b3: serializes SpanContext to/from the B3 multi-header format. You can find more information on this propagator here.

The default value is tracecontext. To configure this setting, you have two options:
- Environment Variable: Use TYK_GW_OPENTELEMETRY_CONTEXTPROPAGATION to specify the context propagator type.
- Configuration File: Set the opentelemetry.context_propagation field in your configuration file to your preferred option.
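For example, to switch to B3 propagation, the relevant tyk.conf fragment might look like this sketch:

```json
{
  "opentelemetry": {
    "enabled": true,
    "context_propagation": "b3"
  }
}
```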
Sampling Strategies
Tyk supports configuring the following sampling strategies via the Sampling configuration structure:
Sampling Type
This setting dictates the sampling policy that OpenTelemetry uses to decide if a trace should be sampled for analysis. The decision is made at the start of a trace and applies throughout its lifetime. By default, the setting is AlwaysOn.
To customize, you can either set the TYK_GW_OPENTELEMETRY_SAMPLING_TYPE environment variable or modify the opentelemetry.sampling.type field in the Tyk Gateway configuration file. Valid values for this setting are:
- AlwaysOn: All traces are sampled.
- AlwaysOff: No traces are sampled.
- TraceIDRatioBased: Samples traces based on a specified ratio.
Sampling Rate
This field is crucial when the Type is configured to TraceIDRatioBased. It defines the fraction of traces that OpenTelemetry will aim to sample and accepts a value between 0.0 and 1.0. For example, a Rate of 0.5 means that approximately 50% of traces will be sampled. The default value is 0.5. To configure this setting, you have the following options:
- Environment Variable: Use TYK_GW_OPENTELEMETRY_SAMPLING_RATE.
- Configuration File: Update the opentelemetry.sampling.rate field in the configuration file.
ParentBased Sampling
This option is useful for ensuring sampling consistency between parent and child spans. Specifically, if a parent span is sampled, all of its child spans will be sampled as well. This setting is particularly effective when used with TraceIDRatioBased, as it helps to keep the entire transaction story together. Using ParentBased with AlwaysOn or AlwaysOff is generally less useful, since in those cases either all or no spans are sampled. The default value is false. Configuration options include:
- Environment Variable: Use TYK_GW_OPENTELEMETRY_SAMPLING_PARENTBASED.
- Configuration File: Update the opentelemetry.sampling.parent_based field in the configuration file.
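Putting these settings together, a ratio-based sampling configuration in tyk.conf could be sketched as follows:

```json
{
  "opentelemetry": {
    "enabled": true,
    "sampling": {
      "type": "TraceIDRatioBased",
      "rate": 0.5,
      "parent_based": true
    }
  }
}
```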
OpenTelemetry Backends for Tracing
Datadog
This guide explains how to configure Tyk API Gateway and the OpenTelemetry Collector to collect distributed traces in Datadog. It follows the reference documentation from Datadog. While this tutorial demonstrates using an OpenTelemetry Collector running in Docker, the core concepts remain consistent regardless of how and where the OpenTelemetry Collector is deployed. Whether you’re using Tyk API Gateway in an open-source (OSS) or commercial deployment, the configuration options remain identical.
Prerequisites
- Docker installed on your machine
- Tyk Gateway v5.2.0 or higher
- OpenTelemetry Collector Contrib docker image. Make sure to use the Contrib distribution of the OpenTelemetry Collector as it is required for the Datadog exporter.
Steps for Configuration
1. Configure the OpenTelemetry Collector

You will need:
- An API key from Datadog, for example 6c35dacbf2e16aa8cda85a58d9015c3c.
- Your Datadog site, for example datadoghq.com, us3.datadoghq.com or datadoghq.eu.

Create a new file called otel-collector.yml with the following content (a minimal sketch is shown below):
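The following is a minimal sketch of such a collector configuration using the Datadog exporter; the API key and site are placeholders to replace with your own values, and the 10s batch timeout matches the ingestion delay mentioned in the troubleshooting section below:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 10s

exporters:
  datadog:
    api:
      key: <YOUR-DATADOG-API-KEY>
      site: datadoghq.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
```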
2. Configure a test API

If you don’t have any APIs configured yet, create a subdirectory called apps in the current directory. Create a new file apidef-hello-world.json and copy a very simple API definition into it for testing purposes (a sketch is shown below):
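A minimal keyless API definition of this kind could look as follows; the names and paths are illustrative, and detailed_tracing is enabled so the API produces detailed spans:

```json
{
  "name": "Hello-World",
  "api_id": "hello-world",
  "org_id": "default",
  "use_keyless": true,
  "active": true,
  "detailed_tracing": true,
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default"
      }
    }
  },
  "proxy": {
    "listen_path": "/hello-world/",
    "target_url": "http://httpbin.org/",
    "strip_listen_path": true
  }
}
```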
3. Create the Docker-Compose file

Save the following YAML configuration to a file named docker-compose.yml (a sketch is shown below). To start the services, go to the directory that contains the docker-compose.yml file and run docker-compose up -d.
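A sketch of such a docker-compose.yml; image tags, volume paths and the collector command are assumptions to adapt to your environment:

```yaml
version: "3.9"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector.yml"]
    volumes:
      - ./otel-collector.yml:/etc/otel-collector.yml
    ports:
      - "4317:4317"

  redis:
    image: redis:6.2-alpine

  tyk-gateway:
    image: tykio/tyk-gateway:v5.2
    ports:
      - "8080:8080"
    environment:
      - TYK_GW_OPENTELEMETRY_ENABLED=true
      - TYK_GW_OPENTELEMETRY_EXPORTER=grpc
      - TYK_GW_OPENTELEMETRY_ENDPOINT=otel-collector:4317
      - TYK_GW_STORAGE_TYPE=redis
      - TYK_GW_STORAGE_HOST=redis
      - TYK_GW_STORAGE_PORT=6379
    volumes:
      - ./apps:/opt/tyk-gateway/apps
    depends_on:
      - redis
      - otel-collector
```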
4. Explore OpenTelemetry traces in Datadog

Begin by sending a few requests to the API endpoint configured in step 2: http://localhost:8080/hello-world/

Next, log in to Datadog and navigate to the ‘APM’ / ‘Traces’ section. Here, you should start observing traces generated by Tyk. Click on a trace to view all its internal spans:
Datadog will generate a service entry to monitor Tyk API Gateway and will automatically compute valuable metrics using the ingested traces.

Troubleshooting
If you do not observe any traces appearing in Datadog, consider the following steps for resolution:
- Logging: Examine logs from Tyk API Gateway and from the OpenTelemetry Collector for any issues or warnings that might provide insights.
- Data Ingestion Delays: Be patient, as there could be some delay in data ingestion. Wait for 10 seconds to see if traces eventually appear, as this is the timeout we have configured in the batch processing of the OpenTelemetry collector within step 1.
Dynatrace
This documentation covers how to set up Dynatrace to ingest OpenTelemetry traces via the OpenTelemetry Collector (OTel Collector) using Docker.
Prerequisites
- Docker installed on your machine
- Dynatrace account
- Dynatrace Token
- Gateway v5.2.0 or higher
- OTel Collector docker image
Steps for Configuration
1. Generate Dynatrace Token
- In the Dynatrace console, navigate to access keys.
- Click on Create a new key
- You will be prompted to select a scope. Choose Ingest OpenTelemetry traces.
- Save the generated token securely; it cannot be retrieved once lost.
2. Configuration Files

- OTel Collector Configuration File: otel-collector-config.yml. In this file, replace <YOUR-ENVIRONMENT-STRING> with the string from the address bar when you log into Dynatrace, and replace <YOUR-DYNATRACE-API-KEY> with the token you generated earlier. A sample configuration file is sketched after this list.
- Docker Compose File
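For otel-collector-config.yml, a minimal sketch following Dynatrace's OTLP ingest API might look like this; the environment string and API token are placeholders to replace:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  otlphttp:
    endpoint: https://<YOUR-ENVIRONMENT-STRING>.live.dynatrace.com/api/v2/otlp
    headers:
      Authorization: "Api-Token <YOUR-DYNATRACE-API-KEY>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```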
3. Testing and Viewing Traces
1. Launch the Docker containers: docker-compose up -d
2. Initialize your Tyk environment.
3. Configure a basic HTTP API on the Tyk Gateway or Dashboard.
4. Use cURL or Postman to send requests to the API gateway.
5. Navigate to Dynatrace -> Services -> Tyk-Gateway.
6. Wait for 5 minutes and refresh.
7. Traces, along with graphs, should appear. If they don’t, click on the “Full Search” button.

Troubleshooting
- If traces are not appearing, try clicking on the “Full Search” button after waiting for 5 minutes. Make sure your Dynatrace token is correct in the configuration files.
- Validate the Docker Compose setup by checking the logs for any errors:
docker-compose logs
Elasticsearch
This quick start explains how to configure Tyk API Gateway (OSS, self-managed or hybrid gateway connected to Tyk Cloud) with the OpenTelemetry Collector to export distributed traces to Elasticsearch.
Prerequisites
Ensure the following prerequisites are met before proceeding:
- Tyk Gateway v5.2 or higher
- OpenTelemetry Collector deployed locally
- Elasticsearch deployed locally or an account on Elastic Cloud with Elastic APM

Steps for Configuration
1. Configure Tyk API Gateway
To enable OpenTelemetry in Tyk API Gateway, follow these steps:
For Tyk Helm Charts:
- Add the following configuration to the Tyk Gateway section:
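As a sketch, and assuming the layout of the tyk-oss umbrella chart (the exact key path may differ between chart versions), the values could look like this:

```yaml
tyk-gateway:
  gateway:
    opentelemetry:
      enabled: true
      exporter: grpc
      endpoint: <Add your endpoint here>
```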
For Docker Compose:
- In your docker-compose.yml file for Tyk Gateway, add the following environment variables (a sketch is shown after this paragraph):

Make sure to replace <Add your endpoint here> with the appropriate endpoint from your OpenTelemetry Collector. After enabling OpenTelemetry at the Gateway level, you can activate detailed tracing for specific APIs by editing their respective API definitions: set the detailed_tracing option to either true or false. By default, this setting is false.
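A sketch of those environment variables for the Tyk Gateway service:

```yaml
environment:
  - TYK_GW_OPENTELEMETRY_ENABLED=true
  - TYK_GW_OPENTELEMETRY_EXPORTER=grpc
  - TYK_GW_OPENTELEMETRY_ENDPOINT=<Add your endpoint here>
```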
2. Configure the OpenTelemetry Collector to Export to Elasticsearch
To configure the OTel Collector with Elasticsearch Cloud, follow these steps:
- Sign up for an Elastic account if you haven’t already
- Once logged in to your Elastic account, select “Observability” and click on the option “Monitor my application performance”

- Scroll down to the APM Agents section and click on the OpenTelemetry tab

- Search for the section “Configure OpenTelemetry in your application”. You will need to copy the value of “OTEL_EXPORTER_OTLP_ENDPOINT” and “OTEL_EXPORTER_OTLP_HEADERS” in your OpenTelemetry Collector configuration file.

- Update your OpenTelemetry Collector configuration; a simple example is sketched below.

If you are running Elasticsearch locally, you will need to use your APM Server endpoint (elastic-apm-server:8200) and set up secret token authorization in Elasticsearch. You can refer to the example configuration provided by Elastic for more guidance on the OpenTelemetry Collector configuration.
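A minimal sketch of such a collector configuration, assuming an OTLP exporter pointing at the endpoint and secret token obtained from Elastic (both placeholders below must be replaced with your own values):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  otlp/elastic:
    endpoint: <OTEL_EXPORTER_OTLP_ENDPOINT value from Elastic>
    headers:
      Authorization: "Bearer <your Elastic APM secret token>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic]
```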
3. Explore OpenTelemetry Traces in Elasticsearch
- In Elasticsearch Cloud:
- Go to “Home” and select “Observability.”

- On the right menu, click on “APM / Services.”
- Click on “tyk-gateway.”
Select a transaction to view more details, including the distributed traces:

New Relic
This guide provides a step-by-step procedure to integrate New Relic with Tyk Gateway via the OpenTelemetry Collector. At the end of this guide, you will be able to visualize traces from your Tyk Gateway on the New Relic console.
Prerequisites
- Docker installed on your machine
- New Relic Account
- New Relic API Key
- Gateway v5.2.0 or higher
- OTel Collector docker image
Steps for Configuration
1. Obtain New Relic API Key
- Navigate to your New Relic Console.
- Go to Profile → API keys.
- Copy the key labeled as INGEST-LICENSE.

You can follow the official New Relic documentation for more information.
2. Configuration Files

OTel Collector Configuration YAML
- Create a file named otel-collector-config.yml under the configs directory.
- Copy the template sketched after this list into that file.
- Replace <YOUR-ENVIRONMENT-STRING> with your specific New Relic endpoint (https://otlp.nr-data.net for US or https://otlp.eu01.nr-data.net for EU).
- Replace <YOUR-NEW-RELIC-API-KEY> with the API key obtained in Step 1.

Docker Compose File
- Create a file named docker-compose.yml at the root level of your project directory.
- Define the Tyk Gateway, Redis and OTel Collector services in that file and replace the variable fields with the relevant data.
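For otel-collector-config.yml, a minimal sketch exporting to New Relic over OTLP/HTTP might look like this; both placeholders are the values described above:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  otlphttp:
    endpoint: <YOUR-ENVIRONMENT-STRING>
    headers:
      api-key: <YOUR-NEW-RELIC-API-KEY>

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```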
3. Testing and Verifying Traces
- Run docker-compose up -d to start all services.
- Initialize your Tyk environment.
- Create a simple httpbin API using Tyk Dashboard. You can follow the Tyk Dashboard documentation for more information.
- Send requests to the API using cURL or Postman.
- Open New Relic Console.
- Navigate to APM & Services → Services - OpenTelemetry → tyk-gateway.

- Wait for about 5 minutes for the data to populate.

If traces are not showing, try refreshing the New Relic dashboard.
Troubleshooting
- If the traces aren’t appearing, double-check your API key and endpoints.
- Ensure that your Tyk Gateway and New Relic are both running and connected.
Conclusion
You have successfully integrated New Relic with Tyk Gateway via the OpenTelemetry Collector. You can now monitor and trace your APIs directly from the New Relic console.
Jaeger
Using Docker
This quick start guide offers a detailed, step-by-step walkthrough for configuring Tyk API Gateway (OSS, self-managed or hybrid gateway connected to Tyk Cloud) with OpenTelemetry and Jaeger to significantly improve API observability. We will cover the installation of essential components, their configuration, and the process of ensuring seamless integration. For Kubernetes instructions, please refer to How to integrate with Jaeger on Kubernetes.
Prerequisites
Ensure the following prerequisites are met before proceeding:
- Docker installed on your machine
- Gateway v5.2.0 or higher
Steps for Configuration
1. Create the Docker-Compose File for Jaeger
Save the following YAML configuration in a file named docker-compose.yml:
This configuration sets up Jaeger’s all-in-one instance with ports exposed for Jaeger UI and the OTLP receiver.
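A sketch of such a docker-compose.yml, assuming the official Jaeger all-in-one image with its OTLP receiver enabled:

```yaml
version: "3.9"
services:
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - "16686:16686"   # Jaeger UI
      - "4317:4317"     # OTLP gRPC receiver
```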
2. Deploy a Test API Definition
If you haven’t configured any APIs yet, follow these steps:
- Create a subdirectory named apps in the current directory.
- Create a new file named apidef-hello-world.json.
- Copy the provided simple API definition into the apidef-hello-world.json file.

This API definition sets up a basic API named Hello-World for testing purposes, configured to proxy requests to http://httpbin.org/.
3. Run Tyk Gateway OSS with OpenTelemetry Enabled
To run Tyk Gateway with OpenTelemetry integration, extend the previous Docker Compose file to include Tyk Gateway and Redis services. Follow these steps:
- Add the following configuration to your existing docker-compose.yml file (a sketch is shown after this list):
- Navigate to the directory containing the docker-compose.yml file in your terminal.
- Execute the following command to start the services:
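A sketch of the services to merge under the existing services: key, assuming the gateway exports spans directly to the jaeger-all-in-one service defined in Step 1 (image tags and paths are illustrative):

```yaml
  redis:
    image: redis:6.2-alpine

  tyk-gateway:
    image: tykio/tyk-gateway:v5.2
    ports:
      - "8080:8080"
    environment:
      - TYK_GW_OPENTELEMETRY_ENABLED=true
      - TYK_GW_OPENTELEMETRY_EXPORTER=grpc
      - TYK_GW_OPENTELEMETRY_ENDPOINT=jaeger-all-in-one:4317
      - TYK_GW_STORAGE_TYPE=redis
      - TYK_GW_STORAGE_HOST=redis
      - TYK_GW_STORAGE_PORT=6379
    volumes:
      - ./apps:/opt/tyk-gateway/apps
    depends_on:
      - redis
      - jaeger-all-in-one
```

To start the services: docker-compose up -d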
4. Explore OpenTelemetry Traces in Jaeger
- Start by sending a few requests to the API endpoint configured in Step 2:
- Access Jaeger at http://localhost:16686.
- In Jaeger’s interface:
- Select the service named tyk-gateway.
- Click the Find Traces button.
Select a trace to visualize its corresponding internal spans:

Using Kubernetes
This quick start guide offers a detailed, step-by-step walkthrough for configuring Tyk Gateway OSS with OpenTelemetry and Jaeger on Kubernetes to significantly improve API observability. We will cover the installation of essential components, their configuration, and the process of ensuring seamless integration. For Docker instructions, please refer to How to integrate with Jaeger on Docker.
Prerequisites
Ensure the following prerequisites are in place before proceeding:
Steps for Configuration
1. Install Jaeger Operator
For the purpose of this tutorial, we will use jaeger-all-in-one, which includes the Jaeger agent, collector, query, and UI in a single pod with in-memory storage. This deployment is intended for development, testing, and demo purposes. Other deployment patterns can be found in the Jaeger Operator documentation.
- Install the cert-manager release manifest (required by Jaeger)
- Install Jaeger Operator.
- After the Jaeger Operator is deployed to the observability namespace, create a Jaeger instance:
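A minimal Jaeger custom resource for the all-in-one deployment pattern could look like this (the instance name is illustrative, but it determines the name of the collector service used later in this guide):

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one
  namespace: observability
```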
2. Deploy Tyk Gateway with OpenTelemetry Enabled using Helm
To install or upgrade Tyk Gateway OSS using Helm, execute the following commands:
Please make sure you are installing a Redis version that is supported by Tyk; refer to the Tyk docs for the list of supported versions. In this setup, the gateway's OpenTelemetry endpoint should point to the jaeger-all-in-one-collector service.
3. Deploy Tyk Operator
Deploy Tyk Operator to manage APIs in your cluster:
4. Deploy a Test API Definition
Save the following API definition as apidef-hello-world.yaml (a sketch is shown below). To apply it, run: kubectl apply -f apidef-hello-world.yaml
This step deploys an API definition named hello-world using the provided configuration. It enables a keyless HTTP API that proxies requests to http://httpbin.org/ and is accessible via the path /hello-world.
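A sketch of such a manifest, assuming the Tyk Operator ApiDefinition custom resource:

```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: hello-world
spec:
  name: hello-world
  use_keyless: true
  protocol: http
  active: true
  proxy:
    target_url: http://httpbin.org/
    listen_path: /hello-world
    strip_listen_path: true
```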
5. Explore OpenTelemetry traces in Jaeger
You can use the kubectl port-forward command to access the Tyk and Jaeger services running in the cluster from your local machine's localhost (a sketch of the commands is shown below). Begin by sending a few requests to the API endpoint configured in step 4. Next, navigate to Jaeger at http://localhost:16686, select the service called tyk-gateway and click the Find traces button. You should see traces generated by Tyk. Click on a trace to view all its internal spans:
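A sketch of the port-forward commands; the service names and namespaces below are assumptions based on the names used earlier in this guide and will depend on your actual Helm release and Jaeger instance names:

```bash
# For Tyk API Gateway (service name depends on your Helm release)
kubectl port-forward service/gateway-svc-tyk-oss-tyk-gateway 8080:8080 -n tyk

# For Jaeger (query service created by the Jaeger Operator for the jaeger-all-in-one instance)
kubectl port-forward service/jaeger-all-in-one-query 16686:16686 -n observability
```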

OpenTracing (deprecated)
OpenTracing tools with legacy Tyk integration
Enabling OpenTracing
OpenTracing can be configured at the Gateway level by adding the following configuration to your Gateway configuration (typically via the tyk.conf file or equivalent environment variables):
- name: the name of the supported tracer.
- enabled: set this to true to enable tracing.
- options: key/value pairs for configuring the enabled tracer. See the supported tracer documentation for more details.
Jaeger
Tyk’s OpenTelemetry Tracing works with Jaeger and we recommend following our guide to use OpenTelemetry with Jaeger rather than the following deprecated Open Tracing method.
Configuring Jaeger in tyk.conf under the tracing section:
The options are settings that are used to initialise the Jaeger client. For more details about the options, see the Jaeger client libraries documentation.
Sample configuration
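A minimal sketch of such a configuration, assuming the Jaeger Go client's constant sampler and a local Jaeger agent (hostnames and ports are illustrative):

```json
{
  "tracing": {
    "enabled": true,
    "name": "jaeger",
    "options": {
      "serviceName": "tyk-gateway",
      "sampler": {
        "type": "const",
        "param": 1
      },
      "reporter": {
        "localAgentHostPort": "jaeger:6831",
        "logSpans": true
      }
    }
  }
}
```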
New Relic
Tyk’s OpenTelemetry Tracing works with New Relic and we recommend following our guide to use OpenTelemetry with New Relic rather than the following deprecated Open Tracing method.
Configuring New Relic in tyk.conf under the tracing section:
Under the options setting you can configure the initialisation of the Zipkin client, which is used to send trace data to New Relic (New Relic can ingest Zipkin-format traces).
Sample configuration
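A minimal sketch of such a configuration, assuming New Relic's Zipkin-format trace API endpoint (replace the API key placeholder with your own license key):

```json
{
  "tracing": {
    "enabled": true,
    "name": "zipkin",
    "options": {
      "reporter": {
        "url": "https://trace-api.newrelic.com/trace/v1?Api-Key=<YOUR-NEW-RELIC-API-KEY>&Data-Format=zipkin&Data-Format-Version=2"
      }
    }
  }
}
```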
reporter.url is the URL of the New Relic trace API endpoint to which trace data will be sent.
Zipkin
Prior to Tyk 5.2, OpenTelemetry is not available, so you must use OpenTracing with the Zipkin Go tracer to send Tyk Gateway traces to Zipkin. Configure Zipkin in tyk.conf under the tracing section:
options are settings that are used to initialise the Zipkin client.
Sample configuration
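A minimal sketch of such a configuration, assuming a Zipkin server reachable at zipkin:9411:

```json
{
  "tracing": {
    "enabled": true,
    "name": "zipkin",
    "options": {
      "reporter": {
        "url": "http://zipkin:9411/api/v2/spans"
      }
    }
  }
}
```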
reporter.url is the URL to the Zipkin server, where trace data will be sent.