
Upgrading to Tyk v2.2

v2.2 Upgrade Notes

Tyk v2.2 introduces many new features but is almost 100% backwards compatible with the configuration of Tyk v2.0.

The only exception to this rule is for users of streaming endpoints: the flush interval setting is now expressed in milliseconds, not seconds. If you are using a flush interval, make sure the value has been updated before you upgrade.
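For example, a flush interval that was previously set to 1 (one second) should now be set to 1000. A minimal sketch of the relevant tyk.conf fragment, assuming your flush interval is set under http_server_options (adjust this to wherever you currently configure it):

"http_server_options": {
    "flush_interval": 1000
},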

Making the most of v2.2

One major upgrade in v2.2 of the gateway, pump and dashboard is faster analytics retrieval. Out of the box, Tyk v2.2 will be configured to make use of aggregate analytics records, which could cause your analytics dashboard to seem empty even though there is data in the database.

This is because Tyk Dashboard will attempt to use the aggregate collections by default. You can ensure backwards compatibility by setting an upgrade cutoff in your tyk_analytics.conf, or disabling the feature altogether:

Aggregate Cutoff

A cut-off period is the date from which aggregate analytics data starts being recorded. Any queries before this date will use the raw log data set, while any queries after it will use the new aggregated data set. This means you lose no data, and can take advantage of the new aggregates as they become available over time.

To set a cut-off, add the following settings to your tyk_analytics.conf with the date you want to use:

"enable_aggregate_lookups": true,
"aggregate_lookup_cutoff": "26/05/2016",

To disable the feature (for example, if your Pump is not set up for aggregates), just set enable_aggregate_lookups to false.
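In that case the relevant line in tyk_analytics.conf would simply be:

"enable_aggregate_lookups": false,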

v2.0 Upgrade Notes

Tyk 2.0 introduces some breaking changes. If you are upgrading from Tyk 1.9, there will be some significant differences in how Tyk is configured.

MongoDB is no longer a dependency

In particular, Tyk 2.0 no longer uses MongoDB at all for centralised API Definition and Policy loads; instead, it queries the Tyk Dashboard service.

What does this mean?

This means that you can't just install Tyk 2.0 on top of an existing installation and expect the tyk.conf and tyk_analytics.conf files to work.

The tyk.conf file will need to be modified like so:

"use_db_app_configs": true,
    "db_app_conf_options": {
        "connection_string": "http://dashboard_host:port",
        "node_is_segmented": false,
        "tags": []

Notice the inclusion of the connection_string parameter. This should point at your Dashboard installation.
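For example, assuming the Dashboard is running on the same host on its default port (3000, unless you have changed listen_port in tyk_analytics.conf), the setting would look like this:

"connection_string": "http://localhost:3000",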

You will also need to make the corresponding change in your policies setup:

"policies": {
        "policy_source": "service",
        "policy_connection_string": "http://dashboard_host:port"

The analytics_config section will now raise a warning if set to “mongo” and this setting will be ignored by the loader:

"analytics_config": {
        "type": "mongo", // THIS IS NO LONGER SUPPORTED

How is the service-based loader secured?

The service-based loader is secured in a few ways. First, there is a shared node secret, which must match in both the tyk.conf and tyk_analytics.conf files:

"node_secret": "node-secret-goes-here",


"shared_node_secret": "node-secret-goes-here",

The shared node secret enables a Tyk node to start up and register with the Dashboard. Once a Tyk node has registered, it is given a node ID from an available pool.

The Tyk node will then use the node ID, the shared secret and a heartbeat to continuously notify the dashboard service that it is running.

The Tyk node will also use a nonce on each service request (reload, update or hot reload), as well as during the heartbeat. This ensures that each request is unique and expected.

What if there is a network partition or the dashboard goes offline?

Nothing. The Tyk node's heartbeat will emit a warning, and the node will keep trying to notify the Dashboard that it is live. When the Dashboard becomes visible again, registration resumes.

If a Tyk node receives a hot reload instruction during a partition, it will attempt a reload. Since the connection will fail, it will log an error and the reload will not occur, which means that your APIs will continue to be proxied.

However: if a network partition causes the Dashboard service to fail, a reload instruction occurs, and the node receives a failed data set from the Dashboard service (such as a failed registration or an access-denied response, which could be caused by a license change), then the node will attempt to re-login and retrieve a new node ID. This should take between 10 and 20 seconds and is all shown in the log output of the node.

If the node has any APIs loaded, it will continue to proxy them even if it has failed to register with the Dashboard service.
